Test Report: Hyper-V_Windows 18259

                    
540f885a6d6e66248f116de2dd0a4210cbfa2dfa:2024-02-29:33352
Test fail (20/247)

TestAddons/parallel/Registry (65.18s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 54.2821ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-9gbz6" [272e10c3-bb6b-4c25-8a39-52fef4b920c7] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0154564s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-fls6q" [c7b49716-3ace-4016-a398-caf565d5c035] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0165261s
addons_test.go:340: (dbg) Run:  kubectl --context addons-268800 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-268800 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-268800 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.8125338s)
addons_test.go:359: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-268800 ip
addons_test.go:359: (dbg) Done: out/minikube-windows-amd64.exe -p addons-268800 ip: (2.5018781s)
addons_test.go:364: expected stderr to be -empty- but got: *"W0229 17:45:46.852728    9408 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n"* .  args "out/minikube-windows-amd64.exe -p addons-268800 ip"
2024/02/29 17:45:49 [DEBUG] GET http://172.26.58.180:5000
addons_test.go:388: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-268800 addons disable registry --alsologtostderr -v=1
addons_test.go:388: (dbg) Done: out/minikube-windows-amd64.exe -p addons-268800 addons disable registry --alsologtostderr -v=1: (14.3142207s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-268800 -n addons-268800
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-268800 -n addons-268800: (12.11653s)
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-268800 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p addons-268800 logs -n 25: (8.3027536s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-201400 | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:38 UTC |                     |
	|         | -p download-only-201400                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:38 UTC | 29 Feb 24 17:38 UTC |
	| delete  | -p download-only-201400                                                                     | download-only-201400 | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:38 UTC | 29 Feb 24 17:38 UTC |
	| start   | -o=json --download-only                                                                     | download-only-993600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:38 UTC |                     |
	|         | -p download-only-993600                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:38 UTC | 29 Feb 24 17:38 UTC |
	| delete  | -p download-only-993600                                                                     | download-only-993600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:38 UTC | 29 Feb 24 17:38 UTC |
	| start   | -o=json --download-only                                                                     | download-only-119000 | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:38 UTC |                     |
	|         | -p download-only-119000                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                                                           |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:39 UTC | 29 Feb 24 17:39 UTC |
	| delete  | -p download-only-119000                                                                     | download-only-119000 | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:39 UTC | 29 Feb 24 17:39 UTC |
	| delete  | -p download-only-201400                                                                     | download-only-201400 | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:39 UTC | 29 Feb 24 17:39 UTC |
	| delete  | -p download-only-993600                                                                     | download-only-993600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:39 UTC | 29 Feb 24 17:39 UTC |
	| delete  | -p download-only-119000                                                                     | download-only-119000 | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:39 UTC | 29 Feb 24 17:39 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-229100 | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:39 UTC |                     |
	|         | binary-mirror-229100                                                                        |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |                   |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |                   |         |                     |                     |
	|         | http://127.0.0.1:51220                                                                      |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | -p binary-mirror-229100                                                                     | binary-mirror-229100 | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:39 UTC | 29 Feb 24 17:39 UTC |
	| addons  | disable dashboard -p                                                                        | addons-268800        | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:39 UTC |                     |
	|         | addons-268800                                                                               |                      |                   |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-268800        | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:39 UTC |                     |
	|         | addons-268800                                                                               |                      |                   |         |                     |                     |
	| start   | -p addons-268800 --wait=true                                                                | addons-268800        | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:39 UTC | 29 Feb 24 17:45 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |                   |         |                     |                     |
	|         | --addons=registry                                                                           |                      |                   |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |                   |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |                   |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |                   |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |                   |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |                   |         |                     |                     |
	|         | --addons=yakd --driver=hyperv                                                               |                      |                   |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |                   |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |                   |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |                   |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-268800        | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:45 UTC | 29 Feb 24 17:45 UTC |
	|         | -p addons-268800                                                                            |                      |                   |         |                     |                     |
	| ssh     | addons-268800 ssh cat                                                                       | addons-268800        | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:45 UTC | 29 Feb 24 17:45 UTC |
	|         | /opt/local-path-provisioner/pvc-917e0246-92bc-479c-8100-dad11aec0009_default_test-pvc/file1 |                      |                   |         |                     |                     |
	| ip      | addons-268800 ip                                                                            | addons-268800        | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:45 UTC | 29 Feb 24 17:45 UTC |
	| addons  | addons-268800 addons disable                                                                | addons-268800        | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:45 UTC | 29 Feb 24 17:46 UTC |
	|         | registry --alsologtostderr                                                                  |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-268800 addons disable                                                                | addons-268800        | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:45 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-268800        | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:46 UTC |                     |
	|         | addons-268800                                                                               |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 17:39:23
	Running on machine: minikube5
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 17:39:23.201273    6496 out.go:291] Setting OutFile to fd 896 ...
	I0229 17:39:23.202433    6496 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:39:23.202433    6496 out.go:304] Setting ErrFile to fd 900...
	I0229 17:39:23.202633    6496 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:39:23.221470    6496 out.go:298] Setting JSON to false
	I0229 17:39:23.224725    6496 start.go:129] hostinfo: {"hostname":"minikube5","uptime":50100,"bootTime":1709178262,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0229 17:39:23.225007    6496 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 17:39:23.226797    6496 out.go:177] * [addons-268800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 17:39:23.227458    6496 notify.go:220] Checking for updates...
	I0229 17:39:23.227882    6496 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 17:39:23.228485    6496 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 17:39:23.228981    6496 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0229 17:39:23.229490    6496 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 17:39:23.230004    6496 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 17:39:23.231255    6496 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 17:39:28.310932    6496 out.go:177] * Using the hyperv driver based on user configuration
	I0229 17:39:28.311615    6496 start.go:299] selected driver: hyperv
	I0229 17:39:28.311695    6496 start.go:903] validating driver "hyperv" against <nil>
	I0229 17:39:28.311695    6496 start.go:914] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 17:39:28.352306    6496 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 17:39:28.353229    6496 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 17:39:28.353229    6496 cni.go:84] Creating CNI manager for ""
	I0229 17:39:28.353229    6496 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 17:39:28.353229    6496 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 17:39:28.353229    6496 start_flags.go:323] config:
	{Name:addons-268800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-268800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 17:39:28.354338    6496 iso.go:125] acquiring lock: {Name:mk91f2ee29fbed5605669750e8cfa308a1229357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 17:39:28.355063    6496 out.go:177] * Starting control plane node addons-268800 in cluster addons-268800
	I0229 17:39:28.356175    6496 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 17:39:28.356592    6496 preload.go:148] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0229 17:39:28.356657    6496 cache.go:56] Caching tarball of preloaded images
	I0229 17:39:28.356853    6496 preload.go:174] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 17:39:28.356853    6496 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0229 17:39:28.357239    6496 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\config.json ...
	I0229 17:39:28.357789    6496 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\config.json: {Name:mk8affa7a95b2de1a99da1e8d08a4179e1f04421 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:39:28.358014    6496 start.go:365] acquiring machines lock for addons-268800: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 17:39:28.358014    6496 start.go:369] acquired machines lock for "addons-268800" in 0s
	I0229 17:39:28.359219    6496 start.go:93] Provisioning new machine with config: &{Name:addons-268800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-268800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 17:39:28.359219    6496 start.go:125] createHost starting for "" (driver="hyperv")
	I0229 17:39:28.360283    6496 out.go:204] * Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0229 17:39:28.360564    6496 start.go:159] libmachine.API.Create for "addons-268800" (driver="hyperv")
	I0229 17:39:28.360652    6496 client.go:168] LocalClient.Create starting
	I0229 17:39:28.361366    6496 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0229 17:39:28.668716    6496 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0229 17:39:28.821038    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0229 17:39:30.883222    6496 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0229 17:39:30.883222    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:39:30.883222    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0229 17:39:32.535165    6496 main.go:141] libmachine: [stdout =====>] : False
	
	I0229 17:39:32.536055    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:39:32.536055    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0229 17:39:33.949125    6496 main.go:141] libmachine: [stdout =====>] : True
	
	I0229 17:39:33.949125    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:39:33.949225    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0229 17:39:37.571123    6496 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0229 17:39:37.571123    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:39:37.573247    6496 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 17:39:37.931144    6496 main.go:141] libmachine: Creating SSH key...
	I0229 17:39:38.134407    6496 main.go:141] libmachine: Creating VM...
	I0229 17:39:38.134407    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0229 17:39:40.785526    6496 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0229 17:39:40.785789    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:39:40.785888    6496 main.go:141] libmachine: Using switch "Default Switch"
	I0229 17:39:40.785979    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0229 17:39:42.471768    6496 main.go:141] libmachine: [stdout =====>] : True
	
	I0229 17:39:42.471859    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:39:42.471859    6496 main.go:141] libmachine: Creating VHD
	I0229 17:39:42.471938    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-268800\fixed.vhd' -SizeBytes 10MB -Fixed
	I0229 17:39:46.105217    6496 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-268800\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : E00AB63F-24C3-4AF7-9516-AE9C3E8442A6
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0229 17:39:46.105217    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:39:46.105615    6496 main.go:141] libmachine: Writing magic tar header
	I0229 17:39:46.105882    6496 main.go:141] libmachine: Writing SSH key tar header
	I0229 17:39:46.113738    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-268800\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-268800\disk.vhd' -VHDType Dynamic -DeleteSource
	I0229 17:39:49.160187    6496 main.go:141] libmachine: [stdout =====>] : 
	I0229 17:39:49.160187    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:39:49.161254    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-268800\disk.vhd' -SizeBytes 20000MB
	I0229 17:39:51.521653    6496 main.go:141] libmachine: [stdout =====>] : 
	I0229 17:39:51.521827    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:39:51.521903    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM addons-268800 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-268800' -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	I0229 17:39:54.927763    6496 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	addons-268800 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0229 17:39:54.927763    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:39:54.927763    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName addons-268800 -DynamicMemoryEnabled $false
	I0229 17:39:56.970943    6496 main.go:141] libmachine: [stdout =====>] : 
	I0229 17:39:56.971490    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:39:56.971490    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor addons-268800 -Count 2
	I0229 17:39:58.996416    6496 main.go:141] libmachine: [stdout =====>] : 
	I0229 17:39:58.996500    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:39:58.996569    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName addons-268800 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-268800\boot2docker.iso'
	I0229 17:40:01.411396    6496 main.go:141] libmachine: [stdout =====>] : 
	I0229 17:40:01.411396    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:40:01.411672    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName addons-268800 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-268800\disk.vhd'
	I0229 17:40:03.916300    6496 main.go:141] libmachine: [stdout =====>] : 
	I0229 17:40:03.916300    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:40:03.916300    6496 main.go:141] libmachine: Starting VM...
	I0229 17:40:03.917379    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM addons-268800
	I0229 17:40:06.684251    6496 main.go:141] libmachine: [stdout =====>] : 
	I0229 17:40:06.684251    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:40:06.684251    6496 main.go:141] libmachine: Waiting for host to start...
	I0229 17:40:06.684951    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:40:08.803191    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:40:08.803367    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:40:08.803367    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-268800 ).networkadapters[0]).ipaddresses[0]
	I0229 17:40:11.193897    6496 main.go:141] libmachine: [stdout =====>] : 
	I0229 17:40:11.193897    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:40:12.203037    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:40:14.275108    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:40:14.275171    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:40:14.275171    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-268800 ).networkadapters[0]).ipaddresses[0]
	I0229 17:40:16.623261    6496 main.go:141] libmachine: [stdout =====>] : 
	I0229 17:40:16.623261    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:40:17.638600    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:40:19.692713    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:40:19.692713    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:40:19.692713    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-268800 ).networkadapters[0]).ipaddresses[0]
	I0229 17:40:22.061936    6496 main.go:141] libmachine: [stdout =====>] : 
	I0229 17:40:22.061936    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:40:23.067798    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:40:25.168989    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:40:25.168989    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:40:25.169065    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-268800 ).networkadapters[0]).ipaddresses[0]
	I0229 17:40:27.534854    6496 main.go:141] libmachine: [stdout =====>] : 
	I0229 17:40:27.534854    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:40:28.536075    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:40:30.588238    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:40:30.588930    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:40:30.589085    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-268800 ).networkadapters[0]).ipaddresses[0]
	I0229 17:40:33.007483    6496 main.go:141] libmachine: [stdout =====>] : 172.26.58.180
	
	I0229 17:40:33.007483    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:40:33.008465    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:40:35.039001    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:40:35.039001    6496 main.go:141] libmachine: [stderr =====>] : 
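The "Waiting for host to start" loop above alternates a `( Hyper-V\Get-VM ).state` query with a `.networkadapters[0].ipaddresses[0]` query, sleeping about a second between rounds, until DHCP hands the guest an address (several empty reads before `172.26.58.180` appears). The retry pattern can be sketched in Python; the `probe` callback here is a hypothetical stand-in for the PowerShell IP query, not minikube's actual driver code:

```python
import time

def wait_for_ip(probe, timeout_s=120.0, interval_s=1.0):
    """Call `probe()` until it returns a non-empty IP string or time runs out.
    `probe` stands in for the ipaddresses[0] query above, which returns empty
    output while the guest is still waiting on a DHCP lease."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        ip = probe()
        if ip:
            return ip
        time.sleep(interval_s)
    raise TimeoutError("VM never reported an IP address")

# Simulated probe: empty for a few rounds, then an address appears,
# like the empty reads before 172.26.58.180 in the log.
answers = iter(["", "", "", "", "172.26.58.180"])
print(wait_for_ip(lambda: next(answers), interval_s=0.01))  # 172.26.58.180
```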
	I0229 17:40:35.039001    6496 machine.go:88] provisioning docker machine ...
	I0229 17:40:35.039001    6496 buildroot.go:166] provisioning hostname "addons-268800"
	I0229 17:40:35.039001    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:40:37.056873    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:40:37.057124    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:40:37.057124    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-268800 ).networkadapters[0]).ipaddresses[0]
	I0229 17:40:39.473541    6496 main.go:141] libmachine: [stdout =====>] : 172.26.58.180
	
	I0229 17:40:39.473541    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:40:39.478040    6496 main.go:141] libmachine: Using SSH client type: native
	I0229 17:40:39.487180    6496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.58.180 22 <nil> <nil>}
	I0229 17:40:39.487180    6496 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-268800 && echo "addons-268800" | sudo tee /etc/hostname
	I0229 17:40:39.652911    6496 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-268800
	
	I0229 17:40:39.652976    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:40:41.704039    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:40:41.704039    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:40:41.704039    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-268800 ).networkadapters[0]).ipaddresses[0]
	I0229 17:40:44.127840    6496 main.go:141] libmachine: [stdout =====>] : 172.26.58.180
	
	I0229 17:40:44.128687    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:40:44.132341    6496 main.go:141] libmachine: Using SSH client type: native
	I0229 17:40:44.133117    6496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.58.180 22 <nil> <nil>}
	I0229 17:40:44.133117    6496 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-268800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-268800/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-268800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 17:40:44.297585    6496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
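The shell fragment above is an idempotent /etc/hosts edit: do nothing if some line already maps the hostname, otherwise rewrite an existing `127.0.1.1` entry, otherwise append one. The same decision logic expressed in Python over the file's text (an illustrative sketch, operating on a string rather than /etc/hosts itself):

```python
import re

def ensure_hostname_mapping(hosts_text: str, hostname: str) -> str:
    """Mirror the shell above: leave the text alone if some line already maps
    `hostname`; otherwise rewrite an existing 127.0.1.1 entry, or append one."""
    if re.search(r"\s" + re.escape(hostname) + r"$", hosts_text, re.M):
        return hosts_text                              # already mapped
    if re.search(r"^127\.0\.1\.1\s", hosts_text, re.M):
        return re.sub(r"^127\.0\.1\.1\s.*$", f"127.0.1.1 {hostname}",
                      hosts_text, flags=re.M)          # replace the old entry
    return hosts_text.rstrip("\n") + f"\n127.0.1.1 {hostname}\n"

print(ensure_hostname_mapping("127.0.0.1 localhost\n", "addons-268800"))
```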
	I0229 17:40:44.297585    6496 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0229 17:40:44.297585    6496 buildroot.go:174] setting up certificates
	I0229 17:40:44.297585    6496 provision.go:83] configureAuth start
	I0229 17:40:44.297585    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:40:46.335497    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:40:46.335497    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:40:46.335497    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-268800 ).networkadapters[0]).ipaddresses[0]
	I0229 17:40:48.755818    6496 main.go:141] libmachine: [stdout =====>] : 172.26.58.180
	
	I0229 17:40:48.755818    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:40:48.755818    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:40:50.734220    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:40:50.734220    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:40:50.734220    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-268800 ).networkadapters[0]).ipaddresses[0]
	I0229 17:40:53.152557    6496 main.go:141] libmachine: [stdout =====>] : 172.26.58.180
	
	I0229 17:40:53.152557    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:40:53.153374    6496 provision.go:138] copyHostCerts
	I0229 17:40:53.154022    6496 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0229 17:40:53.156026    6496 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0229 17:40:53.157723    6496 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0229 17:40:53.158995    6496 provision.go:112] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-268800 san=[172.26.58.180 172.26.58.180 localhost 127.0.0.1 minikube addons-268800]
	I0229 17:40:53.271111    6496 provision.go:172] copyRemoteCerts
	I0229 17:40:53.280110    6496 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 17:40:53.280110    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:40:55.302427    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:40:55.302427    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:40:55.303354    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-268800 ).networkadapters[0]).ipaddresses[0]
	I0229 17:40:57.669296    6496 main.go:141] libmachine: [stdout =====>] : 172.26.58.180
	
	I0229 17:40:57.669296    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:40:57.669926    6496 sshutil.go:53] new ssh client: &{IP:172.26.58.180 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-268800\id_rsa Username:docker}
	I0229 17:40:57.777240    6496 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4968807s)
	I0229 17:40:57.777836    6496 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 17:40:57.824344    6496 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0229 17:40:57.870004    6496 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 17:40:57.917223    6496 provision.go:86] duration metric: configureAuth took 13.6187825s
	I0229 17:40:57.917223    6496 buildroot.go:189] setting minikube options for container-runtime
	I0229 17:40:57.917843    6496 config.go:182] Loaded profile config "addons-268800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 17:40:57.918016    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:40:59.925603    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:40:59.925603    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:40:59.925888    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-268800 ).networkadapters[0]).ipaddresses[0]
	I0229 17:41:02.374712    6496 main.go:141] libmachine: [stdout =====>] : 172.26.58.180
	
	I0229 17:41:02.374712    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:41:02.378760    6496 main.go:141] libmachine: Using SSH client type: native
	I0229 17:41:02.379164    6496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.58.180 22 <nil> <nil>}
	I0229 17:41:02.379164    6496 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 17:41:02.520915    6496 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 17:41:02.520915    6496 buildroot.go:70] root file system type: tmpfs
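The `df --output=fstype / | tail -n 1` probe above reports `tmpfs`, identifying a Buildroot-style live image with a RAM-backed root. The same answer can be read from `/proc/mounts` without shelling out to `df`; a sketch that parses a sample string rather than the live file (illustrative only):

```python
def root_fstype(mounts_text: str) -> str:
    """Find the filesystem type of '/' in /proc/mounts-style text
    (fields: device, mountpoint, fstype, options, dump, pass)."""
    for line in mounts_text.splitlines():
        fields = line.split()
        if len(fields) >= 3 and fields[1] == "/":
            return fields[2]
    raise LookupError("no root mount found")

sample = "sysfs /sys sysfs rw 0 0\nrootfs / tmpfs rw 0 0\n"
print(root_fstype(sample))  # tmpfs
```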
	I0229 17:41:02.521600    6496 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 17:41:02.521747    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:41:04.549497    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:41:04.549646    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:41:04.549646    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-268800 ).networkadapters[0]).ipaddresses[0]
	I0229 17:41:06.984857    6496 main.go:141] libmachine: [stdout =====>] : 172.26.58.180
	
	I0229 17:41:06.985411    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:41:06.988962    6496 main.go:141] libmachine: Using SSH client type: native
	I0229 17:41:06.989579    6496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.58.180 22 <nil> <nil>}
	I0229 17:41:06.989579    6496 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 17:41:07.156846    6496 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 17:41:07.157001    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:41:09.136085    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:41:09.136085    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:41:09.136085    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-268800 ).networkadapters[0]).ipaddresses[0]
	I0229 17:41:11.558918    6496 main.go:141] libmachine: [stdout =====>] : 172.26.58.180
	
	I0229 17:41:11.559117    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:41:11.564710    6496 main.go:141] libmachine: Using SSH client type: native
	I0229 17:41:11.565468    6496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.58.180 22 <nil> <nil>}
	I0229 17:41:11.565468    6496 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 17:41:12.599315    6496 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0229 17:41:12.599315    6496 machine.go:91] provisioned docker machine in 37.5582325s
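The `diff -u old new || { mv ...; systemctl ... restart docker; }` one-liner above is an install-if-changed pattern: the unit file is only swapped into place (and the daemon restarted) when the staged copy differs from the destination or, as on this first boot where `diff` fails with "No such file or directory", the destination does not exist yet. A Python sketch of the same pattern, file handling only, with the systemctl side omitted (illustrative, not minikube's code):

```python
import filecmp, os, shutil, tempfile

def install_if_changed(staged_path: str, dest_path: str) -> bool:
    """Move `staged_path` over `dest_path` only when their contents differ
    (or dest does not exist yet). Returns True when the caller should
    reload/restart the service, False when nothing changed."""
    if os.path.exists(dest_path) and filecmp.cmp(staged_path, dest_path,
                                                 shallow=False):
        os.remove(staged_path)             # identical: drop the staged copy
        return False
    shutil.move(staged_path, dest_path)
    return True

# Demo in a throwaway directory standing in for /lib/systemd/system:
d = tempfile.mkdtemp()
staged = os.path.join(d, "docker.service.new")
dest = os.path.join(d, "docker.service")
with open(staged, "w") as f:
    f.write("[Unit]\n")
print(install_if_changed(staged, dest))  # True: destination did not exist yet
```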
	I0229 17:41:12.599315    6496 client.go:171] LocalClient.Create took 1m44.2328808s
	I0229 17:41:12.599315    6496 start.go:167] duration metric: libmachine.API.Create for "addons-268800" took 1m44.2329686s
	I0229 17:41:12.599315    6496 start.go:300] post-start starting for "addons-268800" (driver="hyperv")
	I0229 17:41:12.599315    6496 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 17:41:12.608085    6496 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 17:41:12.608085    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:41:14.573555    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:41:14.573555    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:41:14.573555    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-268800 ).networkadapters[0]).ipaddresses[0]
	I0229 17:41:16.956402    6496 main.go:141] libmachine: [stdout =====>] : 172.26.58.180
	
	I0229 17:41:16.956402    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:41:16.956402    6496 sshutil.go:53] new ssh client: &{IP:172.26.58.180 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-268800\id_rsa Username:docker}
	I0229 17:41:17.079858    6496 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4714233s)
	I0229 17:41:17.090755    6496 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 17:41:17.097567    6496 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 17:41:17.097669    6496 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0229 17:41:17.098197    6496 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0229 17:41:17.098505    6496 start.go:303] post-start completed in 4.4989403s
	I0229 17:41:17.101371    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:41:19.088342    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:41:19.089064    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:41:19.089270    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-268800 ).networkadapters[0]).ipaddresses[0]
	I0229 17:41:21.472332    6496 main.go:141] libmachine: [stdout =====>] : 172.26.58.180
	
	I0229 17:41:21.472332    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:41:21.472703    6496 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\config.json ...
	I0229 17:41:21.474982    6496 start.go:128] duration metric: createHost completed in 1m53.1094879s
	I0229 17:41:21.475113    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:41:23.463069    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:41:23.463617    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:41:23.463725    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-268800 ).networkadapters[0]).ipaddresses[0]
	I0229 17:41:25.853210    6496 main.go:141] libmachine: [stdout =====>] : 172.26.58.180
	
	I0229 17:41:25.853210    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:41:25.858608    6496 main.go:141] libmachine: Using SSH client type: native
	I0229 17:41:25.859253    6496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.58.180 22 <nil> <nil>}
	I0229 17:41:25.859253    6496 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 17:41:25.997368    6496 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709228486.164801458
	
	I0229 17:41:25.997368    6496 fix.go:206] guest clock: 1709228486.164801458
	I0229 17:41:25.997368    6496 fix.go:219] Guest: 2024-02-29 17:41:26.164801458 +0000 UTC Remote: 2024-02-29 17:41:21.4750662 +0000 UTC m=+118.433010801 (delta=4.689735258s)
	I0229 17:41:25.997517    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:41:27.993427    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:41:27.993427    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:41:27.994300    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-268800 ).networkadapters[0]).ipaddresses[0]
	I0229 17:41:30.378943    6496 main.go:141] libmachine: [stdout =====>] : 172.26.58.180
	
	I0229 17:41:30.378943    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:41:30.382466    6496 main.go:141] libmachine: Using SSH client type: native
	I0229 17:41:30.382985    6496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.58.180 22 <nil> <nil>}
	I0229 17:41:30.382985    6496 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709228485
	I0229 17:41:30.540323    6496 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Feb 29 17:41:25 UTC 2024
	
	I0229 17:41:30.540323    6496 fix.go:226] clock set: Thu Feb 29 17:41:25 UTC 2024
	 (err=<nil>)
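The guest-clock fix above reads the VM clock over SSH with `date +%s.%N`, compares it to the host, and resets it with `sudo date -s @<epoch>`. A minimal sketch of that flow — the epoch values and the 2-second drift threshold are illustrative assumptions, not minikube's actual tolerance:

```shell
# Hypothetical host/guest epochs (the log above shows a delta of ~4.7s).
host_epoch=1709228481
guest_epoch=1709228486
delta=$((guest_epoch - host_epoch))
# Strip a leading minus sign to get the absolute drift in seconds.
abs_delta=${delta#-}
if [ "$abs_delta" -gt 2 ]; then
  # minikube runs this over SSH; here we only print the command it would run.
  echo "sudo date -s @${host_epoch}"
fi
```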
	I0229 17:41:30.540323    6496 start.go:83] releasing machines lock for "addons-268800", held for 2m2.1744936s
	I0229 17:41:30.540971    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:41:32.548041    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:41:32.548942    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:41:32.548942    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-268800 ).networkadapters[0]).ipaddresses[0]
	I0229 17:41:34.953711    6496 main.go:141] libmachine: [stdout =====>] : 172.26.58.180
	
	I0229 17:41:34.953711    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:41:34.956958    6496 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 17:41:34.957085    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:41:34.966730    6496 ssh_runner.go:195] Run: cat /version.json
	I0229 17:41:34.966730    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:41:37.011860    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:41:37.012179    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:41:37.012261    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-268800 ).networkadapters[0]).ipaddresses[0]
	I0229 17:41:37.014591    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:41:37.014591    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:41:37.014680    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-268800 ).networkadapters[0]).ipaddresses[0]
	I0229 17:41:39.456084    6496 main.go:141] libmachine: [stdout =====>] : 172.26.58.180
	
	I0229 17:41:39.456084    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:41:39.456396    6496 sshutil.go:53] new ssh client: &{IP:172.26.58.180 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-268800\id_rsa Username:docker}
	I0229 17:41:39.480910    6496 main.go:141] libmachine: [stdout =====>] : 172.26.58.180
	
	I0229 17:41:39.481366    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:41:39.481917    6496 sshutil.go:53] new ssh client: &{IP:172.26.58.180 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-268800\id_rsa Username:docker}
	I0229 17:41:39.614545    6496 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6573278s)
	I0229 17:41:39.614545    6496 ssh_runner.go:235] Completed: cat /version.json: (4.6475565s)
	I0229 17:41:39.624248    6496 ssh_runner.go:195] Run: systemctl --version
	I0229 17:41:39.643005    6496 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 17:41:39.651779    6496 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 17:41:39.660784    6496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 17:41:39.688931    6496 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 17:41:39.688966    6496 start.go:475] detecting cgroup driver to use...
	I0229 17:41:39.689412    6496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 17:41:39.731632    6496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 17:41:39.759909    6496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 17:41:39.778677    6496 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 17:41:39.786676    6496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 17:41:39.815533    6496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 17:41:39.843779    6496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 17:41:39.874450    6496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 17:41:39.902881    6496 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 17:41:39.930599    6496 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
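The run of `sed` edits above rewrites `/etc/containerd/config.toml` in place to force the "cgroupfs" driver. A self-contained sketch of the key substitution, run against a temp file standing in for the real config (the indentation-preserving capture group is the same as in the log):

```shell
# Stand-in for /etc/containerd/config.toml.
cfg=$(mktemp)
printf '          SystemdCgroup = true\n' > "$cfg"
# Same substitution the log runs: disable SystemdCgroup while preserving
# the line's original indentation via the captured group \1.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
cat "$cfg"
```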
	I0229 17:41:39.961068    6496 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 17:41:39.990004    6496 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 17:41:40.016404    6496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 17:41:40.203671    6496 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 17:41:40.234041    6496 start.go:475] detecting cgroup driver to use...
	I0229 17:41:40.243563    6496 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 17:41:40.276877    6496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 17:41:40.309653    6496 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 17:41:40.345875    6496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 17:41:40.378466    6496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 17:41:40.409243    6496 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 17:41:40.462204    6496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 17:41:40.483966    6496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 17:41:40.524776    6496 ssh_runner.go:195] Run: which cri-dockerd
	I0229 17:41:40.538676    6496 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 17:41:40.556538    6496 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 17:41:40.596084    6496 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 17:41:40.788751    6496 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 17:41:40.958768    6496 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 17:41:40.959089    6496 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 17:41:40.997418    6496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 17:41:41.194845    6496 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 17:41:42.677331    6496 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.4824038s)
	I0229 17:41:42.685887    6496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0229 17:41:42.719526    6496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 17:41:42.753042    6496 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0229 17:41:42.940366    6496 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0229 17:41:43.129000    6496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 17:41:43.319938    6496 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0229 17:41:43.356858    6496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 17:41:43.390421    6496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 17:41:43.574295    6496 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0229 17:41:43.668936    6496 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0229 17:41:43.678553    6496 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0229 17:41:43.687358    6496 start.go:543] Will wait 60s for crictl version
	I0229 17:41:43.697204    6496 ssh_runner.go:195] Run: which crictl
	I0229 17:41:43.715695    6496 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 17:41:43.786972    6496 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0229 17:41:43.793875    6496 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 17:41:43.834675    6496 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 17:41:43.867749    6496 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0229 17:41:43.867749    6496 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0229 17:41:43.872226    6496 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0229 17:41:43.872226    6496 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0229 17:41:43.872226    6496 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0229 17:41:43.872226    6496 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:76:3f:19 Flags:up|broadcast|multicast|running}
	I0229 17:41:43.875632    6496 ip.go:210] interface addr: fe80::841a:4367:8c9:abc/64
	I0229 17:41:43.875632    6496 ip.go:210] interface addr: 172.26.48.1/20
	I0229 17:41:43.887812    6496 ssh_runner.go:195] Run: grep 172.26.48.1	host.minikube.internal$ /etc/hosts
	I0229 17:41:43.895100    6496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.26.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 17:41:43.916659    6496 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 17:41:43.924680    6496 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 17:41:43.948879    6496 docker.go:685] Got preloaded images: 
	I0229 17:41:43.948879    6496 docker.go:691] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0229 17:41:43.957564    6496 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 17:41:43.984679    6496 ssh_runner.go:195] Run: which lz4
	I0229 17:41:43.999830    6496 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 17:41:44.005831    6496 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 17:41:44.006516    6496 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I0229 17:41:45.782944    6496 docker.go:649] Took 1.791789 seconds to copy over tarball
	I0229 17:41:45.796559    6496 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 17:41:52.522784    6496 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (6.7184794s)
	I0229 17:41:52.522821    6496 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 17:41:52.591671    6496 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 17:41:52.609425    6496 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0229 17:41:52.650327    6496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 17:41:52.845303    6496 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 17:41:58.168968    6496 ssh_runner.go:235] Completed: sudo systemctl restart docker: (5.3233698s)
	I0229 17:41:58.176291    6496 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 17:41:58.202754    6496 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0229 17:41:58.202868    6496 cache_images.go:84] Images are preloaded, skipping loading
	I0229 17:41:58.212213    6496 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 17:41:58.246512    6496 cni.go:84] Creating CNI manager for ""
	I0229 17:41:58.246945    6496 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 17:41:58.247016    6496 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 17:41:58.247016    6496 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.26.58.180 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-268800 NodeName:addons-268800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.26.58.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.26.58.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 17:41:58.247016    6496 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.26.58.180
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-268800"
	  kubeletExtraArgs:
	    node-ip: 172.26.58.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.26.58.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 17:41:58.247588    6496 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-268800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.26.58.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-268800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 17:41:58.258561    6496 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 17:41:58.265864    6496 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 17:41:58.286812    6496 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 17:41:58.303374    6496 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0229 17:41:58.332539    6496 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 17:41:58.361780    6496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0229 17:41:58.401091    6496 ssh_runner.go:195] Run: grep 172.26.58.180	control-plane.minikube.internal$ /etc/hosts
	I0229 17:41:58.406746    6496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.26.58.180	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 17:41:58.426783    6496 certs.go:56] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800 for IP: 172.26.58.180
	I0229 17:41:58.426783    6496 certs.go:190] acquiring lock for shared ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:41:58.427350    6496 certs.go:204] generating minikubeCA CA: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0229 17:41:58.705464    6496 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt ...
	I0229 17:41:58.705464    6496 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt: {Name:mkecc83abf7dbcd2f2b0fd63bac36f2a7fe554cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:41:58.713201    6496 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key ...
	I0229 17:41:58.713201    6496 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key: {Name:mk56e2872d5c5070a04729e59e76e7398d15f15d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:41:58.714463    6496 certs.go:204] generating proxyClientCA CA: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0229 17:41:58.934657    6496 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt ...
	I0229 17:41:58.934657    6496 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mkfcb9723e08b8d76b8a2e73084c13f930548396 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:41:58.943140    6496 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key ...
	I0229 17:41:58.943140    6496 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key: {Name:mkd23bfd48ce10457a367dee40c81533c5cc7b5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:41:58.944863    6496 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.key
	I0229 17:41:58.946016    6496 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt with IP's: []
	I0229 17:41:59.223721    6496 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt ...
	I0229 17:41:59.223721    6496 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: {Name:mk61a2e2622f95623337bc0e8c508138e6711fc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:41:59.230133    6496 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.key ...
	I0229 17:41:59.230133    6496 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.key: {Name:mkb1cb1439211a32805dff251f28364df3439e87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:41:59.231169    6496 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\apiserver.key.0d618a89
	I0229 17:41:59.232504    6496 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\apiserver.crt.0d618a89 with IP's: [172.26.58.180 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 17:41:59.665376    6496 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\apiserver.crt.0d618a89 ...
	I0229 17:41:59.665376    6496 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\apiserver.crt.0d618a89: {Name:mkec5e4c736a6ff22a898dc48d86b42eeae42531 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:41:59.666123    6496 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\apiserver.key.0d618a89 ...
	I0229 17:41:59.666123    6496 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\apiserver.key.0d618a89: {Name:mk7fd9febb3aa14f159ba473a1c8a7b4537c9546 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:41:59.667423    6496 certs.go:337] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\apiserver.crt.0d618a89 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\apiserver.crt
	I0229 17:41:59.675747    6496 certs.go:341] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\apiserver.key.0d618a89 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\apiserver.key
	I0229 17:41:59.679400    6496 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\proxy-client.key
	I0229 17:41:59.680266    6496 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\proxy-client.crt with IP's: []
	I0229 17:41:59.831614    6496 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\proxy-client.crt ...
	I0229 17:41:59.831614    6496 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\proxy-client.crt: {Name:mk98cd61c84fdc6daea1d83ef08f6df32f69c5d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:41:59.835494    6496 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\proxy-client.key ...
	I0229 17:41:59.835494    6496 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\proxy-client.key: {Name:mkf3a51f7847c0de80dcf267d758ef5225bf9550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:41:59.842311    6496 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0229 17:41:59.848455    6496 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0229 17:41:59.848657    6496 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0229 17:41:59.848657    6496 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0229 17:41:59.849287    6496 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 17:41:59.895275    6496 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 17:41:59.938044    6496 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 17:41:59.977579    6496 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 17:42:00.023534    6496 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 17:42:00.066726    6496 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 17:42:00.109826    6496 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 17:42:00.157751    6496 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 17:42:00.199070    6496 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 17:42:00.240751    6496 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 17:42:00.282809    6496 ssh_runner.go:195] Run: openssl version
	I0229 17:42:00.300095    6496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 17:42:00.328263    6496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 17:42:00.334672    6496 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:41 /usr/share/ca-certificates/minikubeCA.pem
	I0229 17:42:00.343524    6496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 17:42:00.364800    6496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
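The `b5213941.0` name in the symlink above is OpenSSL's subject-hash naming convention: `openssl x509 -hash` prints an 8-hex-digit hash of the certificate subject, and `<hash>.0` is the filename OpenSSL looks up in the certs directory. A sketch with a throwaway self-signed CA in a temp directory (the `demoCA` subject is invented for the example):

```shell
dir=$(mktemp -d)
# Throwaway self-signed CA, only to demonstrate the hash-link naming.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" -days 1 2>/dev/null
# The subject hash is what names the /etc/ssl/certs/<hash>.0 symlink.
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/${hash}.0"
```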
	I0229 17:42:00.391242    6496 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 17:42:00.397519    6496 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 17:42:00.397519    6496 kubeadm.go:404] StartCluster: {Name:addons-268800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-268800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.26.58.180 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 17:42:00.404555    6496 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 17:42:00.436668    6496 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 17:42:00.462478    6496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 17:42:00.486431    6496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 17:42:00.503698    6496 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 17:42:00.503698    6496 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 17:42:00.571907    6496 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0229 17:42:00.572159    6496 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 17:42:00.751695    6496 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 17:42:00.751695    6496 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 17:42:00.752498    6496 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 17:42:01.127284    6496 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 17:42:01.128515    6496 out.go:204]   - Generating certificates and keys ...
	I0229 17:42:01.129375    6496 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 17:42:01.129544    6496 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 17:42:01.317806    6496 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 17:42:01.523143    6496 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 17:42:01.636540    6496 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 17:42:01.973299    6496 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 17:42:02.116351    6496 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 17:42:02.116486    6496 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-268800 localhost] and IPs [172.26.58.180 127.0.0.1 ::1]
	I0229 17:42:02.772399    6496 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 17:42:02.772905    6496 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-268800 localhost] and IPs [172.26.58.180 127.0.0.1 ::1]
	I0229 17:42:02.968575    6496 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 17:42:03.214145    6496 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 17:42:03.304281    6496 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 17:42:03.304665    6496 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 17:42:03.996536    6496 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 17:42:04.128335    6496 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 17:42:04.201271    6496 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 17:42:04.392110    6496 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 17:42:04.393039    6496 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 17:42:04.396293    6496 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 17:42:04.397201    6496 out.go:204]   - Booting up control plane ...
	I0229 17:42:04.397201    6496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 17:42:04.403423    6496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 17:42:04.407145    6496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 17:42:04.435704    6496 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 17:42:04.436996    6496 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 17:42:04.437344    6496 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 17:42:04.619042    6496 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 17:42:11.120897    6496 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.504006 seconds
	I0229 17:42:11.123134    6496 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 17:42:11.139667    6496 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 17:42:11.695683    6496 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 17:42:11.696711    6496 kubeadm.go:322] [mark-control-plane] Marking the node addons-268800 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 17:42:12.213151    6496 kubeadm.go:322] [bootstrap-token] Using token: oh3cdh.kfqqbjdm09ft8hpb
	I0229 17:42:12.214303    6496 out.go:204]   - Configuring RBAC rules ...
	I0229 17:42:12.214597    6496 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 17:42:12.223390    6496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 17:42:12.235423    6496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 17:42:12.240275    6496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 17:42:12.244480    6496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 17:42:12.244675    6496 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 17:42:12.266603    6496 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 17:42:12.584937    6496 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 17:42:12.630139    6496 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 17:42:12.636762    6496 kubeadm.go:322] 
	I0229 17:42:12.638009    6496 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 17:42:12.638009    6496 kubeadm.go:322] 
	I0229 17:42:12.638009    6496 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 17:42:12.638009    6496 kubeadm.go:322] 
	I0229 17:42:12.638009    6496 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 17:42:12.638009    6496 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 17:42:12.638009    6496 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 17:42:12.638009    6496 kubeadm.go:322] 
	I0229 17:42:12.638009    6496 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 17:42:12.638009    6496 kubeadm.go:322] 
	I0229 17:42:12.638009    6496 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 17:42:12.638009    6496 kubeadm.go:322] 
	I0229 17:42:12.638009    6496 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 17:42:12.638009    6496 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 17:42:12.639089    6496 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 17:42:12.639089    6496 kubeadm.go:322] 
	I0229 17:42:12.639089    6496 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 17:42:12.639089    6496 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 17:42:12.639089    6496 kubeadm.go:322] 
	I0229 17:42:12.639769    6496 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token oh3cdh.kfqqbjdm09ft8hpb \
	I0229 17:42:12.639962    6496 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cee10ebbc824bfc36c0d81f93293570211b0e6bda8098cea612d080b286ee20e \
	I0229 17:42:12.640052    6496 kubeadm.go:322] 	--control-plane 
	I0229 17:42:12.640052    6496 kubeadm.go:322] 
	I0229 17:42:12.640334    6496 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 17:42:12.640391    6496 kubeadm.go:322] 
	I0229 17:42:12.640591    6496 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token oh3cdh.kfqqbjdm09ft8hpb \
	I0229 17:42:12.640784    6496 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cee10ebbc824bfc36c0d81f93293570211b0e6bda8098cea612d080b286ee20e 
	I0229 17:42:12.644193    6496 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 17:42:12.644235    6496 cni.go:84] Creating CNI manager for ""
	I0229 17:42:12.644333    6496 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 17:42:12.645301    6496 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 17:42:12.655337    6496 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 17:42:12.674637    6496 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0229 17:42:12.715797    6496 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 17:42:12.725917    6496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19 minikube.k8s.io/name=addons-268800 minikube.k8s.io/updated_at=2024_02_29T17_42_12_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:42:12.729642    6496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:42:12.793168    6496 ops.go:34] apiserver oom_adj: -16
	I0229 17:42:13.146807    6496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:42:13.655394    6496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:42:14.156583    6496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:42:14.657408    6496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:42:15.158746    6496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:42:15.646996    6496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:42:16.156885    6496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:42:16.657449    6496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:42:17.154062    6496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:42:17.652518    6496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:42:18.153868    6496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:42:18.658819    6496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:42:19.154921    6496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:42:19.659112    6496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:42:20.155658    6496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:42:20.650082    6496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:42:21.152844    6496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:42:21.656436    6496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:42:22.159802    6496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:42:22.658486    6496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:42:23.154960    6496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:42:23.662934    6496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:42:24.156199    6496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:42:24.661819    6496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:42:25.152278    6496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 17:42:25.270910    6496 kubeadm.go:1088] duration metric: took 12.5527876s to wait for elevateKubeSystemPrivileges.
	I0229 17:42:25.271088    6496 kubeadm.go:406] StartCluster complete in 24.8721371s
	I0229 17:42:25.271088    6496 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:42:25.271088    6496 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 17:42:25.272390    6496 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:42:25.275022    6496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 17:42:25.275022    6496 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0229 17:42:25.275545    6496 addons.go:69] Setting ingress=true in profile "addons-268800"
	I0229 17:42:25.275631    6496 addons.go:69] Setting volumesnapshots=true in profile "addons-268800"
	I0229 17:42:25.275631    6496 addons.go:69] Setting ingress-dns=true in profile "addons-268800"
	I0229 17:42:25.275631    6496 addons.go:69] Setting helm-tiller=true in profile "addons-268800"
	I0229 17:42:25.275631    6496 addons.go:234] Setting addon helm-tiller=true in "addons-268800"
	I0229 17:42:25.275631    6496 addons.go:234] Setting addon volumesnapshots=true in "addons-268800"
	I0229 17:42:25.275631    6496 addons.go:234] Setting addon ingress=true in "addons-268800"
	I0229 17:42:25.275631    6496 host.go:66] Checking if "addons-268800" exists ...
	I0229 17:42:25.275631    6496 host.go:66] Checking if "addons-268800" exists ...
	I0229 17:42:25.275631    6496 host.go:66] Checking if "addons-268800" exists ...
	I0229 17:42:25.275631    6496 addons.go:69] Setting inspektor-gadget=true in profile "addons-268800"
	I0229 17:42:25.275631    6496 addons.go:234] Setting addon inspektor-gadget=true in "addons-268800"
	I0229 17:42:25.275631    6496 config.go:182] Loaded profile config "addons-268800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 17:42:25.276176    6496 host.go:66] Checking if "addons-268800" exists ...
	I0229 17:42:25.275631    6496 addons.go:234] Setting addon ingress-dns=true in "addons-268800"
	I0229 17:42:25.276411    6496 host.go:66] Checking if "addons-268800" exists ...
	I0229 17:42:25.275631    6496 addons.go:69] Setting cloud-spanner=true in profile "addons-268800"
	I0229 17:42:25.276582    6496 addons.go:234] Setting addon cloud-spanner=true in "addons-268800"
	I0229 17:42:25.275631    6496 addons.go:69] Setting yakd=true in profile "addons-268800"
	I0229 17:42:25.276923    6496 host.go:66] Checking if "addons-268800" exists ...
	I0229 17:42:25.276962    6496 addons.go:234] Setting addon yakd=true in "addons-268800"
	I0229 17:42:25.277187    6496 host.go:66] Checking if "addons-268800" exists ...
	I0229 17:42:25.275631    6496 addons.go:69] Setting storage-provisioner=true in profile "addons-268800"
	I0229 17:42:25.275631    6496 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-268800"
	I0229 17:42:25.275631    6496 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-268800"
	I0229 17:42:25.275631    6496 addons.go:69] Setting gcp-auth=true in profile "addons-268800"
	I0229 17:42:25.275631    6496 addons.go:69] Setting metrics-server=true in profile "addons-268800"
	I0229 17:42:25.275592    6496 addons.go:69] Setting default-storageclass=true in profile "addons-268800"
	I0229 17:42:25.275631    6496 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-268800"
	I0229 17:42:25.275631    6496 addons.go:69] Setting registry=true in profile "addons-268800"
	I0229 17:42:25.277916    6496 addons.go:234] Setting addon registry=true in "addons-268800"
	I0229 17:42:25.277916    6496 mustload.go:65] Loading cluster: addons-268800
	I0229 17:42:25.277916    6496 addons.go:234] Setting addon metrics-server=true in "addons-268800"
	I0229 17:42:25.277916    6496 host.go:66] Checking if "addons-268800" exists ...
	I0229 17:42:25.277916    6496 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-268800"
	I0229 17:42:25.278459    6496 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-268800"
	I0229 17:42:25.278459    6496 host.go:66] Checking if "addons-268800" exists ...
	I0229 17:42:25.278553    6496 config.go:182] Loaded profile config "addons-268800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 17:42:25.278790    6496 addons.go:234] Setting addon storage-provisioner=true in "addons-268800"
	I0229 17:42:25.279021    6496 host.go:66] Checking if "addons-268800" exists ...
	I0229 17:42:25.279120    6496 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-268800"
	I0229 17:42:25.279120    6496 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-268800"
	I0229 17:42:25.279280    6496 host.go:66] Checking if "addons-268800" exists ...
	I0229 17:42:25.277916    6496 host.go:66] Checking if "addons-268800" exists ...
	I0229 17:42:25.280617    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:42:25.281895    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:42:25.283675    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:42:25.283797    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:42:25.283974    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:42:25.284033    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:42:25.284033    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:42:25.284033    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:42:25.284033    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:42:25.284653    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:42:25.284653    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:42:25.284794    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:42:25.285580    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:42:25.285580    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:42:25.285580    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:42:25.877683    6496 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-268800" context rescaled to 1 replicas
	I0229 17:42:25.877683    6496 start.go:223] Will wait 6m0s for node &{Name: IP:172.26.58.180 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 17:42:25.879028    6496 out.go:177] * Verifying Kubernetes components...
	I0229 17:42:25.893606    6496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.26.48.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0229 17:42:25.902321    6496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 17:42:30.615443    6496 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.26.48.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.721575s)
	I0229 17:42:30.615443    6496 start.go:929] {"host.minikube.internal": 172.26.48.1} host record injected into CoreDNS's ConfigMap
	I0229 17:42:30.615443    6496 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (4.7128604s)
	I0229 17:42:30.617644    6496 node_ready.go:35] waiting up to 6m0s for node "addons-268800" to be "Ready" ...
	I0229 17:42:30.631287    6496 node_ready.go:49] node "addons-268800" has status "Ready":"True"
	I0229 17:42:30.631287    6496 node_ready.go:38] duration metric: took 13.6426ms waiting for node "addons-268800" to be "Ready" ...
	I0229 17:42:30.631287    6496 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 17:42:30.664410    6496 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-j5ggq" in "kube-system" namespace to be "Ready" ...
	I0229 17:42:30.766081    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:42:30.766081    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:30.766081    6496 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.25.1
	I0229 17:42:30.778413    6496 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0229 17:42:30.778497    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0229 17:42:30.778831    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:42:30.901485    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:42:30.901485    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:30.902146    6496 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.4
	I0229 17:42:30.903853    6496 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0229 17:42:30.903853    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0229 17:42:30.904030    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:42:30.912649    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:42:30.912649    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:30.913951    6496 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0229 17:42:30.914739    6496 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0229 17:42:30.914739    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0229 17:42:30.914832    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:42:30.971646    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:42:30.971646    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:30.974162    6496 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0229 17:42:30.984523    6496 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0229 17:42:30.994630    6496 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0229 17:42:31.004062    6496 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0229 17:42:31.004062    6496 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0229 17:42:31.013112    6496 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0229 17:42:31.016565    6496 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0229 17:42:31.016565    6496 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0229 17:42:31.016565    6496 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0229 17:42:31.016565    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0229 17:42:31.016565    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:42:31.018739    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:42:31.018739    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:31.018739    6496 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.14
	I0229 17:42:31.021791    6496 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0229 17:42:31.022444    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0229 17:42:31.022444    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:42:31.067736    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:42:31.067736    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:31.067736    6496 host.go:66] Checking if "addons-268800" exists ...
	I0229 17:42:31.083926    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:42:31.083926    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:31.099975    6496 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0229 17:42:31.083926    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:42:31.099975    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:42:31.099975    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:31.099975    6496 out.go:177]   - Using image docker.io/registry:2.8.3
	I0229 17:42:31.099975    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:31.099975    6496 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0229 17:42:31.110507    6496 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0229 17:42:31.110507    6496 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.6
	I0229 17:42:31.099975    6496 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0229 17:42:31.110507    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0229 17:42:31.110507    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0229 17:42:31.106683    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:42:31.112203    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:31.112203    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:42:31.112731    6496 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231226-1a7112e06
	I0229 17:42:31.114239    6496 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231226-1a7112e06
	I0229 17:42:31.112731    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:42:31.114847    6496 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0229 17:42:31.114847    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0229 17:42:31.114847    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:42:31.115464    6496 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-268800"
	I0229 17:42:31.115464    6496 host.go:66] Checking if "addons-268800" exists ...
	I0229 17:42:31.116948    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:42:31.148738    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:42:31.148738    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:31.149149    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:42:31.149149    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:31.152017    6496 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0229 17:42:31.152680    6496 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0229 17:42:31.152680    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0229 17:42:31.152680    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:42:31.152017    6496 addons.go:234] Setting addon default-storageclass=true in "addons-268800"
	I0229 17:42:31.153272    6496 host.go:66] Checking if "addons-268800" exists ...
	I0229 17:42:31.156117    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:42:31.457737    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:42:31.457804    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:31.459193    6496 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 17:42:31.459883    6496 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 17:42:31.459883    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 17:42:31.459883    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:42:32.848019    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:42:32.848019    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:32.908884    6496 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0229 17:42:32.908884    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:42:32.966185    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:33.021172    6496 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0229 17:42:32.967232    6496 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0229 17:42:33.143336    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0229 17:42:33.143336    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:42:33.191067    6496 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0229 17:42:33.191067    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0229 17:42:33.191067    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:42:33.242996    6496 pod_ready.go:102] pod "coredns-5dd5756b68-j5ggq" in "kube-system" namespace has status "Ready":"False"
	I0229 17:42:35.403166    6496 pod_ready.go:102] pod "coredns-5dd5756b68-j5ggq" in "kube-system" namespace has status "Ready":"False"
	I0229 17:42:35.966365    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:42:35.966365    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:35.966365    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-268800 ).networkadapters[0]).ipaddresses[0]
	I0229 17:42:36.031662    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:42:36.032228    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:36.032287    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-268800 ).networkadapters[0]).ipaddresses[0]
	I0229 17:42:36.059116    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:42:36.059116    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:36.059116    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-268800 ).networkadapters[0]).ipaddresses[0]
	I0229 17:42:36.111589    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:42:36.111589    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:36.111589    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-268800 ).networkadapters[0]).ipaddresses[0]
	I0229 17:42:36.205039    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:42:36.205039    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:36.205039    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-268800 ).networkadapters[0]).ipaddresses[0]
	I0229 17:42:36.306851    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:42:36.307025    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:36.307025    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-268800 ).networkadapters[0]).ipaddresses[0]
	I0229 17:42:36.328262    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:42:36.328262    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:36.378356    6496 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0229 17:42:36.379133    6496 out.go:177]   - Using image docker.io/busybox:stable
	I0229 17:42:36.379990    6496 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0229 17:42:36.379990    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0229 17:42:36.379990    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:42:36.564316    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:42:36.564316    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:36.564316    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-268800 ).networkadapters[0]).ipaddresses[0]
	I0229 17:42:36.630353    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:42:36.630353    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:36.630353    6496 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 17:42:36.630353    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 17:42:36.630353    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:42:36.713647    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:42:36.713716    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:36.713716    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-268800 ).networkadapters[0]).ipaddresses[0]
	I0229 17:42:36.723713    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:42:36.723713    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:36.723713    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-268800 ).networkadapters[0]).ipaddresses[0]
	I0229 17:42:36.831806    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:42:36.831806    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:36.831806    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-268800 ).networkadapters[0]).ipaddresses[0]
	I0229 17:42:37.404197    6496 pod_ready.go:102] pod "coredns-5dd5756b68-j5ggq" in "kube-system" namespace has status "Ready":"False"
	I0229 17:42:38.695943    6496 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0229 17:42:38.696268    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:42:39.460088    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:42:39.460088    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:39.460088    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-268800 ).networkadapters[0]).ipaddresses[0]
	I0229 17:42:39.697935    6496 pod_ready.go:102] pod "coredns-5dd5756b68-j5ggq" in "kube-system" namespace has status "Ready":"False"
	I0229 17:42:39.805148    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:42:39.805148    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:39.805148    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-268800 ).networkadapters[0]).ipaddresses[0]
	I0229 17:42:41.600047    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:42:41.600047    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:41.600047    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-268800 ).networkadapters[0]).ipaddresses[0]
	I0229 17:42:41.605474    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:42:41.605474    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:41.605474    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-268800 ).networkadapters[0]).ipaddresses[0]
	I0229 17:42:41.704569    6496 pod_ready.go:102] pod "coredns-5dd5756b68-j5ggq" in "kube-system" namespace has status "Ready":"False"
	I0229 17:42:41.927646    6496 main.go:141] libmachine: [stdout =====>] : 172.26.58.180
	
	I0229 17:42:41.927646    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:41.932701    6496 sshutil.go:53] new ssh client: &{IP:172.26.58.180 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-268800\id_rsa Username:docker}
	I0229 17:42:42.029204    6496 main.go:141] libmachine: [stdout =====>] : 172.26.58.180
	
	I0229 17:42:42.038049    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:42.038600    6496 sshutil.go:53] new ssh client: &{IP:172.26.58.180 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-268800\id_rsa Username:docker}
	I0229 17:42:42.173486    6496 main.go:141] libmachine: [stdout =====>] : 172.26.58.180
	
	I0229 17:42:42.173486    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:42.173486    6496 sshutil.go:53] new ssh client: &{IP:172.26.58.180 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-268800\id_rsa Username:docker}
	I0229 17:42:42.238495    6496 main.go:141] libmachine: [stdout =====>] : 172.26.58.180
	
	I0229 17:42:42.238495    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:42.240372    6496 sshutil.go:53] new ssh client: &{IP:172.26.58.180 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-268800\id_rsa Username:docker}
	I0229 17:42:42.313773    6496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0229 17:42:42.361080    6496 main.go:141] libmachine: [stdout =====>] : 172.26.58.180
	
	I0229 17:42:42.361080    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:42.361794    6496 sshutil.go:53] new ssh client: &{IP:172.26.58.180 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-268800\id_rsa Username:docker}
	I0229 17:42:42.446799    6496 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0229 17:42:42.446799    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0229 17:42:42.486380    6496 main.go:141] libmachine: [stdout =====>] : 172.26.58.180
	
	I0229 17:42:42.486429    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:42.486763    6496 sshutil.go:53] new ssh client: &{IP:172.26.58.180 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-268800\id_rsa Username:docker}
	I0229 17:42:42.585527    6496 main.go:141] libmachine: [stdout =====>] : 172.26.58.180
	
	I0229 17:42:42.585527    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:42.585825    6496 sshutil.go:53] new ssh client: &{IP:172.26.58.180 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-268800\id_rsa Username:docker}
	I0229 17:42:42.634577    6496 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0229 17:42:42.634577    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0229 17:42:42.659795    6496 main.go:141] libmachine: [stdout =====>] : 172.26.58.180
	
	I0229 17:42:42.659852    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:42.660066    6496 sshutil.go:53] new ssh client: &{IP:172.26.58.180 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-268800\id_rsa Username:docker}
	I0229 17:42:42.704105    6496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0229 17:42:42.745800    6496 main.go:141] libmachine: [stdout =====>] : 172.26.58.180
	
	I0229 17:42:42.745800    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:42.745800    6496 sshutil.go:53] new ssh client: &{IP:172.26.58.180 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-268800\id_rsa Username:docker}
	I0229 17:42:42.755773    6496 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0229 17:42:42.755883    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0229 17:42:42.784876    6496 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 17:42:42.784876    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0229 17:42:42.829724    6496 main.go:141] libmachine: [stdout =====>] : 172.26.58.180
	
	I0229 17:42:42.829782    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:42.830678    6496 sshutil.go:53] new ssh client: &{IP:172.26.58.180 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-268800\id_rsa Username:docker}
	I0229 17:42:42.846372    6496 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0229 17:42:42.846372    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0229 17:42:42.850448    6496 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0229 17:42:42.850448    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0229 17:42:42.948278    6496 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0229 17:42:42.948278    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0229 17:42:42.965171    6496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0229 17:42:43.002686    6496 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0229 17:42:43.002746    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0229 17:42:43.018814    6496 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0229 17:42:43.018814    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0229 17:42:43.045785    6496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0229 17:42:43.231209    6496 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0229 17:42:43.231326    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0229 17:42:43.249726    6496 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0229 17:42:43.249726    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0229 17:42:43.268603    6496 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0229 17:42:43.268651    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0229 17:42:43.300441    6496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0229 17:42:43.338860    6496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 17:42:43.340539    6496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0229 17:42:43.370554    6496 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0229 17:42:43.370554    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0229 17:42:43.456313    6496 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0229 17:42:43.456313    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0229 17:42:43.458647    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:42:43.458647    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:43.458647    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-268800 ).networkadapters[0]).ipaddresses[0]
	I0229 17:42:43.496342    6496 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0229 17:42:43.496454    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0229 17:42:43.603632    6496 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0229 17:42:43.603695    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0229 17:42:43.640114    6496 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0229 17:42:43.640114    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0229 17:42:43.799409    6496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0229 17:42:43.821167    6496 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0229 17:42:43.821167    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0229 17:42:43.917425    6496 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0229 17:42:43.917540    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0229 17:42:43.940480    6496 main.go:141] libmachine: [stdout =====>] : 172.26.58.180
	
	I0229 17:42:43.940480    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:43.940767    6496 sshutil.go:53] new ssh client: &{IP:172.26.58.180 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-268800\id_rsa Username:docker}
	I0229 17:42:43.993911    6496 main.go:141] libmachine: [stdout =====>] : 172.26.58.180
	
	I0229 17:42:43.993911    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:43.994467    6496 sshutil.go:53] new ssh client: &{IP:172.26.58.180 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-268800\id_rsa Username:docker}
	I0229 17:42:44.023886    6496 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0229 17:42:44.023886    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0229 17:42:44.085184    6496 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0229 17:42:44.085184    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0229 17:42:44.178207    6496 pod_ready.go:102] pod "coredns-5dd5756b68-j5ggq" in "kube-system" namespace has status "Ready":"False"
	I0229 17:42:44.216131    6496 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0229 17:42:44.216131    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0229 17:42:44.281613    6496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0229 17:42:44.402301    6496 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0229 17:42:44.402301    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0229 17:42:44.525640    6496 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0229 17:42:44.525640    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0229 17:42:44.573169    6496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.2592215s)
	I0229 17:42:44.623588    6496 main.go:141] libmachine: [stdout =====>] : 172.26.58.180
	
	I0229 17:42:44.627926    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:44.627926    6496 sshutil.go:53] new ssh client: &{IP:172.26.58.180 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-268800\id_rsa Username:docker}
	I0229 17:42:44.726006    6496 main.go:141] libmachine: [stdout =====>] : 172.26.58.180
	
	I0229 17:42:44.726006    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:44.728299    6496 sshutil.go:53] new ssh client: &{IP:172.26.58.180 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-268800\id_rsa Username:docker}
	I0229 17:42:44.924047    6496 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0229 17:42:44.924176    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0229 17:42:44.956999    6496 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0229 17:42:44.956999    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0229 17:42:44.976389    6496 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0229 17:42:44.976389    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0229 17:42:45.119162    6496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0229 17:42:45.204013    6496 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0229 17:42:45.204013    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0229 17:42:45.232246    6496 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0229 17:42:45.232289    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0229 17:42:45.353652    6496 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0229 17:42:45.353652    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0229 17:42:45.363618    6496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 17:42:45.437539    6496 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0229 17:42:45.437609    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0229 17:42:45.541502    6496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0229 17:42:45.608380    6496 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0229 17:42:45.608380    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0229 17:42:45.671960    6496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0229 17:42:45.837968    6496 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0229 17:42:45.837968    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0229 17:42:46.022300    6496 main.go:141] libmachine: [stdout =====>] : 172.26.58.180
	
	I0229 17:42:46.022300    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:46.022300    6496 sshutil.go:53] new ssh client: &{IP:172.26.58.180 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-268800\id_rsa Username:docker}
	I0229 17:42:46.283420    6496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0229 17:42:46.485924    6496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.7816092s)
	I0229 17:42:46.691346    6496 pod_ready.go:102] pod "coredns-5dd5756b68-j5ggq" in "kube-system" namespace has status "Ready":"False"
	I0229 17:42:46.941448    6496 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0229 17:42:47.247957    6496 addons.go:234] Setting addon gcp-auth=true in "addons-268800"
	I0229 17:42:47.248136    6496 host.go:66] Checking if "addons-268800" exists ...
	I0229 17:42:47.249451    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:42:48.728431    6496 pod_ready.go:102] pod "coredns-5dd5756b68-j5ggq" in "kube-system" namespace has status "Ready":"False"
	I0229 17:42:49.114168    6496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.1486553s)
	I0229 17:42:49.114168    6496 addons.go:470] Verifying addon metrics-server=true in "addons-268800"
	I0229 17:42:49.415653    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:42:49.415653    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:49.428497    6496 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0229 17:42:49.428567    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-268800 ).state
	I0229 17:42:51.176969    6496 pod_ready.go:102] pod "coredns-5dd5756b68-j5ggq" in "kube-system" namespace has status "Ready":"False"
	I0229 17:42:51.533716    6496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:42:51.533716    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:51.538173    6496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-268800 ).networkadapters[0]).ipaddresses[0]
	I0229 17:42:53.260051    6496 pod_ready.go:102] pod "coredns-5dd5756b68-j5ggq" in "kube-system" namespace has status "Ready":"False"
	I0229 17:42:53.999727    6496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (10.9533349s)
	I0229 17:42:53.999727    6496 addons.go:470] Verifying addon ingress=true in "addons-268800"
	I0229 17:42:53.999727    6496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.6986934s)
	I0229 17:42:53.999727    6496 addons.go:470] Verifying addon registry=true in "addons-268800"
	I0229 17:42:54.000725    6496 out.go:177] * Verifying ingress addon...
	I0229 17:42:53.999727    6496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.6602764s)
	I0229 17:42:53.999727    6496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (10.6585977s)
	I0229 17:42:54.000261    6496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (10.1997526s)
	I0229 17:42:54.000308    6496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (9.7181566s)
	I0229 17:42:54.001733    6496 out.go:177] * Verifying registry addon...
	I0229 17:42:54.003371    6496 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0229 17:42:54.003371    6496 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0229 17:42:54.023313    6496 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0229 17:42:54.023313    6496 main.go:141] libmachine: [stdout =====>] : 172.26.58.180
	
	I0229 17:42:54.023408    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:54.023408    6496 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:42:54.024208    6496 sshutil.go:53] new ssh client: &{IP:172.26.58.180 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-268800\id_rsa Username:docker}
	I0229 17:42:54.024346    6496 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0229 17:42:54.024346    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:42:54.543979    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:42:54.553443    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:55.173211    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:42:55.180601    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:55.527678    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:55.527678    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:42:55.702957    6496 pod_ready.go:102] pod "coredns-5dd5756b68-j5ggq" in "kube-system" namespace has status "Ready":"False"
	I0229 17:42:55.781049    6496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.4168542s)
	I0229 17:42:55.781049    6496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (10.2389791s)
	I0229 17:42:55.781049    6496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.1085288s)
	I0229 17:42:55.781708    6496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.4977616s)
	I0229 17:42:55.781751    6496 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (6.3529022s)
	W0229 17:42:55.781813    6496 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0229 17:42:55.782874    6496 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-268800 service yakd-dashboard -n yakd-dashboard
	
	I0229 17:42:55.781962    6496 retry.go:31] will retry after 364.689409ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0229 17:42:55.781049    6496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (10.6612962s)
	I0229 17:42:55.783027    6496 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-268800"
	I0229 17:42:55.782770    6496 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231226-1a7112e06
	I0229 17:42:55.783703    6496 out.go:177] * Verifying csi-hostpath-driver addon...
	I0229 17:42:55.784733    6496 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.1
	I0229 17:42:55.785394    6496 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0229 17:42:55.785466    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0229 17:42:55.786262    6496 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0229 17:42:55.831693    6496 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0229 17:42:55.831693    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:55.832294    6496 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0229 17:42:55.832294    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	W0229 17:42:55.880294    6496 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class csi-hostpath-sc as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "csi-hostpath-sc": the object has been modified; please apply your changes to the latest version and try again]
	I0229 17:42:55.885901    6496 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0229 17:42:55.885901    6496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5447 bytes)
	I0229 17:42:55.957393    6496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0229 17:42:56.032331    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:42:56.032915    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:56.161796    6496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0229 17:42:56.333555    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:56.616396    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:42:56.629887    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:56.816066    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:57.028081    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:57.036205    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:42:57.303195    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:57.527672    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:42:57.528035    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:57.812743    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:58.069461    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:58.071487    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:42:58.174939    6496 pod_ready.go:102] pod "coredns-5dd5756b68-j5ggq" in "kube-system" namespace has status "Ready":"False"
	I0229 17:42:58.300883    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:58.519673    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:58.527375    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:42:58.788061    6496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.6261187s)
	I0229 17:42:58.788642    6496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.8310927s)
	I0229 17:42:58.797303    6496 addons.go:470] Verifying addon gcp-auth=true in "addons-268800"
	I0229 17:42:58.798430    6496 out.go:177] * Verifying gcp-auth addon...
	I0229 17:42:58.799860    6496 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0229 17:42:58.814936    6496 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0229 17:42:58.814936    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:58.826143    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:59.027695    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:42:59.028124    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:59.306542    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:42:59.310640    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:59.522184    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:42:59.523700    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:42:59.808393    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:42:59.811834    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:00.021535    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:00.022692    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:00.196557    6496 pod_ready.go:102] pod "coredns-5dd5756b68-j5ggq" in "kube-system" namespace has status "Ready":"False"
	I0229 17:43:00.297366    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:00.318976    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:00.520784    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:00.520784    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:00.812113    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:00.814302    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:01.035433    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:01.036092    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:01.306631    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:01.308683    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:01.513566    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:01.515905    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:01.801337    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:01.806426    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:02.014217    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:02.015048    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:02.307940    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:02.309649    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:02.517494    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:02.519575    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:02.679774    6496 pod_ready.go:102] pod "coredns-5dd5756b68-j5ggq" in "kube-system" namespace has status "Ready":"False"
	I0229 17:43:02.802551    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:02.806148    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:03.026143    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:03.026501    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:03.306103    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:03.311159    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:03.513447    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:03.514244    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:03.804198    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:03.807508    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:04.032669    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:04.033725    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:04.297056    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:04.315288    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:04.517369    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:04.517531    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:04.688497    6496 pod_ready.go:102] pod "coredns-5dd5756b68-j5ggq" in "kube-system" namespace has status "Ready":"False"
	I0229 17:43:04.801285    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:04.805351    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:05.019141    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:05.022199    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:05.311553    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:05.312625    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:06.009074    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:06.009074    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:06.009793    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:06.011772    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:06.017668    6496 pod_ready.go:92] pod "coredns-5dd5756b68-j5ggq" in "kube-system" namespace has status "Ready":"True"
	I0229 17:43:06.017668    6496 pod_ready.go:81] duration metric: took 35.351298s waiting for pod "coredns-5dd5756b68-j5ggq" in "kube-system" namespace to be "Ready" ...
	I0229 17:43:06.017668    6496 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ncw5v" in "kube-system" namespace to be "Ready" ...
	I0229 17:43:06.017668    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:06.020007    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:06.020125    6496 pod_ready.go:97] error getting pod "coredns-5dd5756b68-ncw5v" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-ncw5v" not found
	I0229 17:43:06.020125    6496 pod_ready.go:81] duration metric: took 2.456ms waiting for pod "coredns-5dd5756b68-ncw5v" in "kube-system" namespace to be "Ready" ...
	E0229 17:43:06.020125    6496 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-ncw5v" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-ncw5v" not found
	I0229 17:43:06.020125    6496 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-268800" in "kube-system" namespace to be "Ready" ...
	I0229 17:43:06.025747    6496 pod_ready.go:92] pod "etcd-addons-268800" in "kube-system" namespace has status "Ready":"True"
	I0229 17:43:06.025747    6496 pod_ready.go:81] duration metric: took 5.6221ms waiting for pod "etcd-addons-268800" in "kube-system" namespace to be "Ready" ...
	I0229 17:43:06.025747    6496 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-268800" in "kube-system" namespace to be "Ready" ...
	I0229 17:43:06.034984    6496 pod_ready.go:92] pod "kube-apiserver-addons-268800" in "kube-system" namespace has status "Ready":"True"
	I0229 17:43:06.034984    6496 pod_ready.go:81] duration metric: took 9.2362ms waiting for pod "kube-apiserver-addons-268800" in "kube-system" namespace to be "Ready" ...
	I0229 17:43:06.034984    6496 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-268800" in "kube-system" namespace to be "Ready" ...
	I0229 17:43:06.049828    6496 pod_ready.go:92] pod "kube-controller-manager-addons-268800" in "kube-system" namespace has status "Ready":"True"
	I0229 17:43:06.049828    6496 pod_ready.go:81] duration metric: took 14.8438ms waiting for pod "kube-controller-manager-addons-268800" in "kube-system" namespace to be "Ready" ...
	I0229 17:43:06.049828    6496 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9vd5v" in "kube-system" namespace to be "Ready" ...
	I0229 17:43:06.057479    6496 pod_ready.go:92] pod "kube-proxy-9vd5v" in "kube-system" namespace has status "Ready":"True"
	I0229 17:43:06.057526    6496 pod_ready.go:81] duration metric: took 7.697ms waiting for pod "kube-proxy-9vd5v" in "kube-system" namespace to be "Ready" ...
	I0229 17:43:06.057526    6496 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-268800" in "kube-system" namespace to be "Ready" ...
	I0229 17:43:06.565887    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:06.566264    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:06.566264    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:06.567753    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:06.569995    6496 pod_ready.go:92] pod "kube-scheduler-addons-268800" in "kube-system" namespace has status "Ready":"True"
	I0229 17:43:06.569995    6496 pod_ready.go:81] duration metric: took 512.4408ms waiting for pod "kube-scheduler-addons-268800" in "kube-system" namespace to be "Ready" ...
	I0229 17:43:06.569995    6496 pod_ready.go:38] duration metric: took 35.936715s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 17:43:06.569995    6496 api_server.go:52] waiting for apiserver process to appear ...
	I0229 17:43:06.581087    6496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 17:43:06.608918    6496 api_server.go:72] duration metric: took 40.7289763s to wait for apiserver process to appear ...
	I0229 17:43:06.608918    6496 api_server.go:88] waiting for apiserver healthz status ...
	I0229 17:43:06.608918    6496 api_server.go:253] Checking apiserver healthz at https://172.26.58.180:8443/healthz ...
	I0229 17:43:06.616349    6496 api_server.go:279] https://172.26.58.180:8443/healthz returned 200:
	ok
	I0229 17:43:06.620008    6496 api_server.go:141] control plane version: v1.28.4
	I0229 17:43:06.620008    6496 api_server.go:131] duration metric: took 11.0893ms to wait for apiserver health ...
	I0229 17:43:06.620008    6496 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 17:43:06.632260    6496 system_pods.go:59] 18 kube-system pods found
	I0229 17:43:06.632260    6496 system_pods.go:61] "coredns-5dd5756b68-j5ggq" [d58361ab-dff4-4edd-955a-a0af44f4bf0e] Running
	I0229 17:43:06.632260    6496 system_pods.go:61] "csi-hostpath-attacher-0" [fc6ad230-3d92-465b-9401-8fe4c5dfac8e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0229 17:43:06.632260    6496 system_pods.go:61] "csi-hostpath-resizer-0" [670b3939-264e-450a-bf71-5de018658933] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0229 17:43:06.632260    6496 system_pods.go:61] "csi-hostpathplugin-f9pcl" [93196d53-051b-42ec-b196-4d78670f63ef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0229 17:43:06.632260    6496 system_pods.go:61] "etcd-addons-268800" [0ba35c5c-719d-4722-a013-a1a1cf04567c] Running
	I0229 17:43:06.632260    6496 system_pods.go:61] "kube-apiserver-addons-268800" [4d730c59-d71b-4d39-82f6-889761a7a1b7] Running
	I0229 17:43:06.632260    6496 system_pods.go:61] "kube-controller-manager-addons-268800" [cdc7227a-e53a-4462-959d-decaf31eeabb] Running
	I0229 17:43:06.632260    6496 system_pods.go:61] "kube-ingress-dns-minikube" [c338a3e8-6223-4612-aff2-e521eb5c0d19] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0229 17:43:06.632260    6496 system_pods.go:61] "kube-proxy-9vd5v" [820ae574-3dfe-493d-8720-f7a3b072fcfd] Running
	I0229 17:43:06.632260    6496 system_pods.go:61] "kube-scheduler-addons-268800" [c31ab502-cd9b-4dc9-a56e-d13a6d8cd871] Running
	I0229 17:43:06.632260    6496 system_pods.go:61] "metrics-server-69cf46c98-tnkzm" [f081da8b-c4f4-4f27-a4d1-de33a4a6dd10] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 17:43:06.632260    6496 system_pods.go:61] "nvidia-device-plugin-daemonset-h6jfq" [6c4b400a-48aa-404c-9a94-dc86cfedf0a5] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0229 17:43:06.632260    6496 system_pods.go:61] "registry-9gbz6" [272e10c3-bb6b-4c25-8a39-52fef4b920c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0229 17:43:06.632260    6496 system_pods.go:61] "registry-proxy-fls6q" [c7b49716-3ace-4016-a398-caf565d5c035] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0229 17:43:06.632260    6496 system_pods.go:61] "snapshot-controller-58dbcc7b99-47q9p" [6cb1f972-b15e-4469-8210-fab0b744edee] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0229 17:43:06.632260    6496 system_pods.go:61] "snapshot-controller-58dbcc7b99-k2d92" [fe252ffb-ddb9-4a81-aa42-f373270ec139] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0229 17:43:06.632260    6496 system_pods.go:61] "storage-provisioner" [226113fb-f347-4ab2-aa52-c69feef828d7] Running
	I0229 17:43:06.632260    6496 system_pods.go:61] "tiller-deploy-7b677967b9-n7k8w" [accd94dd-d92d-4fb3-b5f2-99d6bfea5044] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0229 17:43:06.632260    6496 system_pods.go:74] duration metric: took 12.2505ms to wait for pod list to return data ...
	I0229 17:43:06.632260    6496 default_sa.go:34] waiting for default service account to be created ...
	I0229 17:43:06.806783    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:06.810404    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:06.813394    6496 default_sa.go:45] found service account: "default"
	I0229 17:43:06.813447    6496 default_sa.go:55] duration metric: took 181.177ms for default service account to be created ...
	I0229 17:43:06.813447    6496 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 17:43:07.028409    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:07.029000    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:07.037760    6496 system_pods.go:86] 18 kube-system pods found
	I0229 17:43:07.037760    6496 system_pods.go:89] "coredns-5dd5756b68-j5ggq" [d58361ab-dff4-4edd-955a-a0af44f4bf0e] Running
	I0229 17:43:07.037760    6496 system_pods.go:89] "csi-hostpath-attacher-0" [fc6ad230-3d92-465b-9401-8fe4c5dfac8e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0229 17:43:07.037760    6496 system_pods.go:89] "csi-hostpath-resizer-0" [670b3939-264e-450a-bf71-5de018658933] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0229 17:43:07.037760    6496 system_pods.go:89] "csi-hostpathplugin-f9pcl" [93196d53-051b-42ec-b196-4d78670f63ef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0229 17:43:07.037760    6496 system_pods.go:89] "etcd-addons-268800" [0ba35c5c-719d-4722-a013-a1a1cf04567c] Running
	I0229 17:43:07.037760    6496 system_pods.go:89] "kube-apiserver-addons-268800" [4d730c59-d71b-4d39-82f6-889761a7a1b7] Running
	I0229 17:43:07.037760    6496 system_pods.go:89] "kube-controller-manager-addons-268800" [cdc7227a-e53a-4462-959d-decaf31eeabb] Running
	I0229 17:43:07.037760    6496 system_pods.go:89] "kube-ingress-dns-minikube" [c338a3e8-6223-4612-aff2-e521eb5c0d19] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0229 17:43:07.037760    6496 system_pods.go:89] "kube-proxy-9vd5v" [820ae574-3dfe-493d-8720-f7a3b072fcfd] Running
	I0229 17:43:07.037760    6496 system_pods.go:89] "kube-scheduler-addons-268800" [c31ab502-cd9b-4dc9-a56e-d13a6d8cd871] Running
	I0229 17:43:07.037760    6496 system_pods.go:89] "metrics-server-69cf46c98-tnkzm" [f081da8b-c4f4-4f27-a4d1-de33a4a6dd10] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0229 17:43:07.037760    6496 system_pods.go:89] "nvidia-device-plugin-daemonset-h6jfq" [6c4b400a-48aa-404c-9a94-dc86cfedf0a5] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0229 17:43:07.037760    6496 system_pods.go:89] "registry-9gbz6" [272e10c3-bb6b-4c25-8a39-52fef4b920c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0229 17:43:07.037760    6496 system_pods.go:89] "registry-proxy-fls6q" [c7b49716-3ace-4016-a398-caf565d5c035] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0229 17:43:07.037760    6496 system_pods.go:89] "snapshot-controller-58dbcc7b99-47q9p" [6cb1f972-b15e-4469-8210-fab0b744edee] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0229 17:43:07.037760    6496 system_pods.go:89] "snapshot-controller-58dbcc7b99-k2d92" [fe252ffb-ddb9-4a81-aa42-f373270ec139] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0229 17:43:07.037760    6496 system_pods.go:89] "storage-provisioner" [226113fb-f347-4ab2-aa52-c69feef828d7] Running
	I0229 17:43:07.037760    6496 system_pods.go:89] "tiller-deploy-7b677967b9-n7k8w" [accd94dd-d92d-4fb3-b5f2-99d6bfea5044] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0229 17:43:07.037760    6496 system_pods.go:126] duration metric: took 224.3014ms to wait for k8s-apps to be running ...
	I0229 17:43:07.037760    6496 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 17:43:07.044357    6496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 17:43:07.072142    6496 system_svc.go:56] duration metric: took 34.3797ms WaitForService to wait for kubelet.
	I0229 17:43:07.072142    6496 kubeadm.go:581] duration metric: took 41.1921742s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 17:43:07.072142    6496 node_conditions.go:102] verifying NodePressure condition ...
	I0229 17:43:07.221790    6496 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 17:43:07.221790    6496 node_conditions.go:123] node cpu capacity is 2
	I0229 17:43:07.221790    6496 node_conditions.go:105] duration metric: took 149.6395ms to run NodePressure ...
	I0229 17:43:07.221790    6496 start.go:228] waiting for startup goroutines ...
	I0229 17:43:07.301707    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:07.304821    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:07.529033    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:07.530787    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:07.811029    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:07.811644    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:08.028544    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:08.028825    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:08.296981    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:08.313382    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:08.520044    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:08.520653    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:09.260705    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:09.261304    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:09.263995    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:09.266049    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:09.300718    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:09.304570    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:09.513936    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:09.514636    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:09.806858    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:09.814410    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:10.015249    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:10.018231    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:10.310410    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:10.312777    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:10.524561    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:10.535954    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:10.809399    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:10.810533    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:11.028518    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:11.054043    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:11.300418    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:11.314269    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:11.525507    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:11.525626    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:11.799864    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:11.816872    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:12.018434    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:12.025352    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:12.297575    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:12.318298    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:12.526326    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:12.527279    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:12.803262    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:12.806042    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:13.017260    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:13.017260    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:13.307321    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:13.312801    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:13.534392    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:13.534392    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:13.803450    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:13.806815    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:14.017119    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:14.018124    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:14.295864    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:14.313339    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:14.529316    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:14.529459    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:14.799470    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:14.817793    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:15.028794    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:15.028973    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:15.314736    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:15.316870    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:16.588572    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:16.588845    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:16.609283    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:16.610317    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:16.610406    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:16.610693    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:16.615585    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:16.625528    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:16.797000    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:16.812715    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:17.019752    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:17.021630    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:17.306321    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:17.309490    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:17.513185    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:17.514210    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:17.795976    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:17.807719    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:18.032423    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:18.033022    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:18.303028    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:18.305894    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:18.521211    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:18.521900    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:18.812263    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:18.813846    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:19.029119    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:19.030309    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:19.312464    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:19.318246    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:19.531541    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:19.534551    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:19.806474    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:19.808525    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:20.014268    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:20.015678    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:20.302169    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:20.307840    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:20.528292    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:20.528894    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:20.805442    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:20.811129    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:21.035042    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:21.035500    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:21.299734    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:21.313442    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:21.521193    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:21.521193    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:21.808209    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:21.815012    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:22.027769    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:22.028378    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:22.303187    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:22.307568    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:22.529977    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:22.530772    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:22.809203    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:22.810535    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:23.018026    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:23.018026    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:23.301055    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:23.314031    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:23.521228    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:23.522110    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:23.810362    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:23.815095    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:24.021697    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:24.023954    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:24.305834    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:24.308630    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:24.528259    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:24.528259    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:24.811186    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:24.811378    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:25.019267    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:25.019267    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:25.298537    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:25.312173    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:25.511732    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:25.512405    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:25.804008    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:25.804251    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:26.024424    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:26.026167    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:26.307652    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:26.311649    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:26.526527    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:26.527881    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:26.813020    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:26.814737    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:27.014158    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:27.015610    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:27.301152    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:27.320074    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:27.532022    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:27.533242    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:27.800928    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:27.817118    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:28.022765    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:28.023362    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:28.298309    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:28.314778    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:28.523208    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:28.525749    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:28.798849    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:28.816522    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:29.018373    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:29.019541    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:29.307255    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:29.310346    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:29.532968    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:29.534657    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:29.808136    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:29.809123    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:30.027084    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:30.028143    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:30.309336    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:30.311459    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:30.523883    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:30.524685    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:30.810939    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:30.812827    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:31.018769    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:31.021085    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:31.309757    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:31.313609    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:31.531839    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:31.532337    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:31.799247    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:31.817229    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:32.016878    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:32.017251    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:32.311566    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:32.315919    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:32.532667    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:32.533288    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:32.807703    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:32.809420    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:33.015097    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:33.015701    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:33.308645    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:33.317839    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:33.517488    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:33.520892    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:33.799372    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:33.823606    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:34.022871    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:34.023131    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:34.307623    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:34.310495    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:34.531911    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:34.532817    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:35.118408    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:35.118673    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:35.119424    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:35.119995    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:35.300628    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:35.314232    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:35.529877    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:35.530400    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:35.805204    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:35.809106    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:36.026020    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:36.026957    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:36.299516    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:36.314398    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:36.533729    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:36.551346    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:36.810143    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:36.810973    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:37.027823    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:37.030297    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:37.308644    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:37.311096    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:37.526740    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:37.526939    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:37.808424    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:37.812182    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:38.033295    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:38.054804    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:38.315790    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:38.319836    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:38.565002    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:38.566232    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:38.815393    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:38.818659    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:39.020336    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:39.022691    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:39.297916    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:39.315251    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:39.527828    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:39.527828    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:39.814096    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:39.823918    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:40.030229    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:40.031102    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:40.314577    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:40.317995    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:40.530451    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:40.531571    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:40.981398    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:40.987467    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:41.028102    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:41.031171    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:41.309093    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:41.312321    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:41.515663    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:41.516176    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:41.801069    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:41.813070    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:42.017273    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:42.020498    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:42.324289    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:42.324654    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:42.531444    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:42.531444    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:42.797541    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:42.819562    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:43.018232    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:43.018630    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:43.311085    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:43.323907    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:43.519091    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:43.520980    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:43.816562    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:43.821311    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:44.028919    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:44.030273    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:44.299543    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:44.310190    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:44.532992    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:44.536508    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:44.806832    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:44.812656    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:45.038645    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:45.038645    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:45.314632    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:45.315944    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:45.520901    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:45.520959    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:45.802942    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:45.807564    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:46.015411    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:46.018649    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:46.300641    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:46.316349    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:46.519674    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:46.521564    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:46.805481    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:46.809539    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:47.021439    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:47.024777    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:47.300460    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:47.308904    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:47.531829    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:47.532283    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:47.802179    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:47.827088    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:48.024403    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:48.024866    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:48.313905    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:48.316217    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:48.517245    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:48.517245    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:48.813401    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:48.814755    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:49.015777    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:49.015856    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:49.311402    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:49.314058    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:49.516654    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:49.516708    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:49.809755    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:49.813919    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:50.023648    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:50.024304    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:50.310519    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:50.313736    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:50.522806    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:50.525202    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:50.823749    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:50.827556    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:51.018628    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:51.019065    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:51.311153    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:51.326407    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:51.562744    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:51.562744    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:51.811964    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:51.814891    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:52.016006    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:52.016792    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:52.298824    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:52.324418    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:52.530899    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:52.531043    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:52.796389    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:52.813516    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:53.024489    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:53.024554    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:53.306489    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:53.309175    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:53.534970    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:53.534970    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:53.804769    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:53.813682    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:54.023973    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:54.026305    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:54.304020    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:54.308276    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:54.515737    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:54.519486    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:54.813335    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:54.823663    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:55.058185    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:55.058627    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:55.300479    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:55.311001    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:55.563386    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:55.563801    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:55.811710    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:55.813719    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:56.030315    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:56.030447    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:56.309000    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:56.311807    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:56.530862    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:56.531190    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:56.804790    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:56.809310    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:57.029662    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:57.030265    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:57.305954    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:57.309649    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:57.522193    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:57.525378    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:57.798742    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:57.820104    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:58.026814    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:58.028846    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:58.299659    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:58.311134    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:58.527942    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:58.530424    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:58.812822    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:58.814130    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:59.015750    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:59.018222    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:59.302536    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:59.319269    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:43:59.522312    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:43:59.525434    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:43:59.800602    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:43:59.811342    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:00.025495    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:44:00.026119    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:00.314731    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:00.316996    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:00.527412    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:44:00.528399    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:00.800658    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:00.813426    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:01.020803    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:01.023341    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:44:01.315438    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:01.315609    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:01.514898    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:01.517743    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:44:01.809752    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:01.812430    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:02.032215    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:44:02.034168    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:02.314627    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:02.315984    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:02.525867    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:02.532597    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:44:02.809926    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:02.812903    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:03.020521    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:03.024523    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:44:03.309839    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:03.314193    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:03.531776    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:44:03.532292    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:03.804330    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:03.816202    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:04.030719    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:44:04.031271    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:04.302107    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:04.322965    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:04.522252    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:04.522529    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:44:04.809411    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:04.813366    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:05.015997    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:05.016620    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:44:05.307795    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:05.310286    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:05.530079    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:05.531277    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0229 17:44:05.809754    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:05.812408    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:06.036991    6496 kapi.go:107] duration metric: took 1m12.0296293s to wait for kubernetes.io/minikube-addons=registry ...
	I0229 17:44:06.036991    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:06.321918    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:06.325823    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:06.530758    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:06.812290    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:06.814048    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:07.032112    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:07.303137    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:07.324619    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:07.513729    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:07.803794    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:07.809464    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:08.019531    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:08.322940    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:08.327289    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:08.525050    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:08.807772    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:08.817152    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:09.017367    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:09.301341    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:09.321899    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:09.520018    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:09.802269    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:09.818932    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:10.020922    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:10.310133    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:10.314068    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:10.527018    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:10.807660    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:10.811170    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:11.029533    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:11.315514    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:11.316380    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:11.518622    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:11.816548    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:11.819429    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:12.025698    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:12.307029    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:12.311708    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:12.517341    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:12.804783    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:12.808983    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:13.025263    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:13.311213    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:13.320551    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:13.525907    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:13.810032    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:13.814548    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:14.017323    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:14.300291    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:14.315440    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:14.531058    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:14.807658    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:14.814111    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:15.025602    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:15.308507    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:15.313196    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:15.530928    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:15.808428    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:15.813094    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:16.024038    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:16.304159    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:16.310065    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:16.523093    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:16.807025    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:16.817957    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:17.028734    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:17.302115    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:17.315768    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:17.527047    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:17.811839    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:17.814591    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:18.027649    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:18.303483    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:18.309763    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:18.517738    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:18.808389    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:18.811344    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:19.028105    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:19.299731    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:19.315026    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:19.518034    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:19.912422    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:19.914547    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:20.021835    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:20.309767    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:20.316412    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:20.516057    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:20.815865    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:20.821211    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:21.027447    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:21.304246    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:21.316066    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:21.528549    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:21.801280    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:21.822130    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:22.021996    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:22.303355    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:22.319571    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:22.540119    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:22.810932    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:22.829852    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:23.024630    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:23.314761    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:23.316316    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:23.528371    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:23.810667    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:23.817175    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:24.034523    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:24.320610    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:24.321462    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:24.533860    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:24.813556    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:24.814200    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:25.022137    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:25.304151    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:25.326486    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:25.535445    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:25.803132    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:25.823586    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:26.021488    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:26.318175    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:26.320353    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:26.528668    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:26.812422    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:26.816896    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:27.028246    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:27.314238    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:27.316869    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:27.527218    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:27.863794    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:27.866904    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:28.027441    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:28.305131    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:28.316227    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:28.523589    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:28.803685    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:28.820477    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:29.040088    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:29.302364    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:29.317786    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:29.532114    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:29.817597    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:29.818413    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:30.027304    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:30.303714    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:30.321047    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:30.531082    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:30.822230    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:30.832847    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:31.032431    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:31.309268    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:31.312352    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:31.526774    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:31.815296    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:31.830552    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:32.030213    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:32.316703    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:32.318660    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:32.524438    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:32.956381    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:32.957159    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:33.022006    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:33.309895    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:33.314091    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:33.522865    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:33.806229    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:33.811033    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:34.034500    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:34.299731    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:34.314425    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:34.516824    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:34.821218    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:34.825338    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:35.019841    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:35.307382    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:35.310814    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:35.519712    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:35.806129    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:35.810133    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:36.026572    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:36.311369    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:36.314722    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:36.535637    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:36.809939    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:36.816315    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:37.024685    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:37.306187    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:37.310822    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:37.533157    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:37.817406    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:37.822789    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:38.041236    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:38.312465    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:38.315172    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:38.524478    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:38.808270    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:38.818388    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:39.019378    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:39.301771    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:39.328046    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:39.515842    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:39.806754    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:39.811230    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:40.030325    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:40.316152    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:40.318262    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:40.525561    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:40.803362    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:40.815320    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:41.029207    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:41.314636    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:41.316674    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:41.532973    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:41.816616    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:41.821521    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:42.026460    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:42.306759    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:42.311257    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:42.516267    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:42.806780    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:42.813004    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:43.033880    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:43.312258    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:43.320637    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:43.528842    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:43.807346    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:43.811492    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:44.018355    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:44.313467    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:44.316811    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:44.531906    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:44.809813    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0229 17:44:44.813853    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:45.019703    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:45.310661    6496 kapi.go:107] duration metric: took 1m49.5183317s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0229 17:44:45.314735    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:45.531240    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:45.813508    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:46.024434    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:46.322463    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:46.532356    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:46.827656    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:47.017846    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:47.317514    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:47.522843    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:47.825903    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:48.017933    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:48.317863    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:48.527340    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:48.817380    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:49.023808    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:49.312573    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:49.530477    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:49.828248    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:50.020023    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:50.325505    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:50.531236    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:50.825792    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:51.022215    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:51.315204    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:51.527138    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:51.817529    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:52.027809    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:52.316847    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:52.526034    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:52.825873    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:53.023705    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:53.314891    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:53.518355    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:53.823184    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:54.017534    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:54.321758    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:54.523300    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:54.822264    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:55.019832    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:55.319657    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:55.524245    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:55.826312    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:56.018129    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:56.322607    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:56.524757    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:56.823849    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:57.020840    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:57.322066    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:57.532702    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:57.826325    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:58.021282    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:58.323181    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:58.519239    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:58.817968    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:59.025628    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:59.340678    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:44:59.523133    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:44:59.816582    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:00.027402    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:00.318423    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:00.524943    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:00.823997    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:01.027715    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:01.315291    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:01.532767    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:01.835703    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:02.025601    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:02.318479    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:02.532824    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:02.822696    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:03.032459    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:03.331096    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:03.528511    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:03.823396    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:04.041646    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:04.320917    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:04.531056    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:04.816092    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:05.025862    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:05.324888    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:05.530629    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:05.813626    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:06.023282    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:06.313144    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:06.532937    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:06.814059    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:07.019947    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:07.325764    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:07.523992    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:07.828547    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:08.025013    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:08.314246    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:08.529987    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:08.824138    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:09.028228    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:09.326235    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:09.520805    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:09.823008    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:10.020681    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:10.319547    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:10.529881    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:10.822098    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:11.018498    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:11.313164    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:11.524027    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:11.821399    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:12.025010    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:12.322530    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:12.537421    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:12.820452    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:13.017472    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:13.317120    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:13.518923    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:13.826528    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:14.031667    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:14.323977    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:14.527652    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:14.815636    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:15.018661    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:15.334394    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:15.530673    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:15.827677    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:16.023612    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:16.332458    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:16.517749    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:16.816090    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:17.019876    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:17.329395    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:17.529974    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:17.813387    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:18.024231    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:18.332390    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:18.517424    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:18.825625    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:19.030882    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:19.320612    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:19.533742    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:19.835366    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:20.032219    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:20.313058    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:20.528748    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:20.830706    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:21.035015    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:21.324666    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:21.530454    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:21.813519    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:22.030932    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:22.315856    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:22.578257    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:22.820016    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:23.030671    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:23.323788    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:23.528476    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:23.864804    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:24.033913    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:24.320208    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:24.521742    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:24.814325    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:25.035425    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:25.315794    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:25.528594    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:25.824626    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:26.023631    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:26.328561    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:26.531257    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:26.818274    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:27.029914    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:27.324607    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:27.526946    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:27.821423    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:28.033632    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:28.330381    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:28.518359    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:28.818740    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:29.022045    6496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0229 17:45:29.327783    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:29.537265    6496 kapi.go:107] duration metric: took 2m35.5251825s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0229 17:45:29.822494    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:30.345592    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:30.819422    6496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0229 17:45:31.348584    6496 kapi.go:107] duration metric: took 2m32.5402728s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0229 17:45:31.349718    6496 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-268800 cluster.
	I0229 17:45:31.350620    6496 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0229 17:45:31.351160    6496 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0229 17:45:31.352099    6496 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, metrics-server, storage-provisioner, inspektor-gadget, ingress-dns, helm-tiller, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0229 17:45:31.352218    6496 addons.go:505] enable addons completed in 3m6.0668851s: enabled=[nvidia-device-plugin cloud-spanner metrics-server storage-provisioner inspektor-gadget ingress-dns helm-tiller yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0229 17:45:31.352218    6496 start.go:233] waiting for cluster config update ...
	I0229 17:45:31.352218    6496 start.go:242] writing updated cluster config ...
	I0229 17:45:31.361259    6496 ssh_runner.go:195] Run: rm -f paused
	I0229 17:45:31.576120    6496 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 17:45:31.576981    6496 out.go:177] * Done! kubectl is now configured to use "addons-268800" cluster and "default" namespace by default
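The gcp-auth tip in the output above ("add a label with the `gcp-auth-skip-secret` key to your pod configuration") can be applied with a pod manifest like the following sketch. The pod name and command are illustrative; the image is the busybox image already used by this test run; the label value `"true"` is an assumption, since the message only specifies the key:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-creds                    # illustrative name
  labels:
    gcp-auth-skip-secret: "true"        # key from the tip above; value assumed, message specifies only the key
spec:
  containers:
  - name: app
    image: gcr.io/k8s-minikube/busybox  # image used elsewhere in this test run
    command: ["sleep", "3600"]
```

With this label present, the gcp-auth webhook in the addons-268800 cluster should leave the pod's credentials unmounted, per the message above.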
	
	
	==> Docker <==
	Feb 29 17:46:04 addons-268800 dockerd[1295]: time="2024-02-29T17:46:04.102920305Z" level=warning msg="cleaning up after shim disconnected" id=aa242cd1985a3885fd79d4bacaf5784fdb19374b8b1d512d4ebfd1428f975446 namespace=moby
	Feb 29 17:46:04 addons-268800 dockerd[1295]: time="2024-02-29T17:46:04.103175619Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 29 17:46:04 addons-268800 dockerd[1289]: time="2024-02-29T17:46:04.104654100Z" level=info msg="ignoring event" container=aa242cd1985a3885fd79d4bacaf5784fdb19374b8b1d512d4ebfd1428f975446 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 17:46:04 addons-268800 dockerd[1295]: time="2024-02-29T17:46:04.175807223Z" level=warning msg="cleanup warnings time=\"2024-02-29T17:46:04Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Feb 29 17:46:04 addons-268800 dockerd[1295]: time="2024-02-29T17:46:04.387113074Z" level=info msg="shim disconnected" id=f5bf8f76fb64949a18a16ddff6b7761863799356efcf4f40d087b5dbb2b490f7 namespace=moby
	Feb 29 17:46:04 addons-268800 dockerd[1295]: time="2024-02-29T17:46:04.387289383Z" level=warning msg="cleaning up after shim disconnected" id=f5bf8f76fb64949a18a16ddff6b7761863799356efcf4f40d087b5dbb2b490f7 namespace=moby
	Feb 29 17:46:04 addons-268800 dockerd[1295]: time="2024-02-29T17:46:04.387306684Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 29 17:46:04 addons-268800 dockerd[1289]: time="2024-02-29T17:46:04.388210834Z" level=info msg="ignoring event" container=f5bf8f76fb64949a18a16ddff6b7761863799356efcf4f40d087b5dbb2b490f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 17:46:05 addons-268800 dockerd[1289]: time="2024-02-29T17:46:05.522824171Z" level=info msg="ignoring event" container=27b7fae40cee1c6b0a8f3b6f3a007aaf34b1ade113f79ed2cf574056d7ae515b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 17:46:05 addons-268800 dockerd[1295]: time="2024-02-29T17:46:05.524142343Z" level=info msg="shim disconnected" id=27b7fae40cee1c6b0a8f3b6f3a007aaf34b1ade113f79ed2cf574056d7ae515b namespace=moby
	Feb 29 17:46:05 addons-268800 dockerd[1295]: time="2024-02-29T17:46:05.524196546Z" level=warning msg="cleaning up after shim disconnected" id=27b7fae40cee1c6b0a8f3b6f3a007aaf34b1ade113f79ed2cf574056d7ae515b namespace=moby
	Feb 29 17:46:05 addons-268800 dockerd[1295]: time="2024-02-29T17:46:05.524208847Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 29 17:46:05 addons-268800 dockerd[1289]: time="2024-02-29T17:46:05.690638053Z" level=info msg="ignoring event" container=008243a8909d8b806e3b52f3e43ba6a1d363d5ed6acd9ae60afcbbf1d324cfcd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 17:46:05 addons-268800 dockerd[1295]: time="2024-02-29T17:46:05.692333245Z" level=info msg="shim disconnected" id=008243a8909d8b806e3b52f3e43ba6a1d363d5ed6acd9ae60afcbbf1d324cfcd namespace=moby
	Feb 29 17:46:05 addons-268800 dockerd[1295]: time="2024-02-29T17:46:05.692429551Z" level=warning msg="cleaning up after shim disconnected" id=008243a8909d8b806e3b52f3e43ba6a1d363d5ed6acd9ae60afcbbf1d324cfcd namespace=moby
	Feb 29 17:46:05 addons-268800 dockerd[1295]: time="2024-02-29T17:46:05.692492854Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 29 17:46:05 addons-268800 dockerd[1295]: time="2024-02-29T17:46:05.721989868Z" level=warning msg="cleanup warnings time=\"2024-02-29T17:46:05Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Feb 29 17:46:17 addons-268800 dockerd[1295]: time="2024-02-29T17:46:17.613140155Z" level=info msg="shim disconnected" id=b569a2d46175b3994c6797d2ff8d4b7ada38cb1422431f826323c848c6f0c880 namespace=moby
	Feb 29 17:46:17 addons-268800 dockerd[1295]: time="2024-02-29T17:46:17.614781945Z" level=warning msg="cleaning up after shim disconnected" id=b569a2d46175b3994c6797d2ff8d4b7ada38cb1422431f826323c848c6f0c880 namespace=moby
	Feb 29 17:46:17 addons-268800 dockerd[1295]: time="2024-02-29T17:46:17.614938454Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 29 17:46:17 addons-268800 dockerd[1289]: time="2024-02-29T17:46:17.616196123Z" level=info msg="ignoring event" container=b569a2d46175b3994c6797d2ff8d4b7ada38cb1422431f826323c848c6f0c880 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 17:46:17 addons-268800 dockerd[1289]: time="2024-02-29T17:46:17.808482640Z" level=info msg="ignoring event" container=1c2d62783d621d092027bac4f5c07f7f1684bd291a754196f3f77933ba36a50d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 17:46:17 addons-268800 dockerd[1295]: time="2024-02-29T17:46:17.809057771Z" level=info msg="shim disconnected" id=1c2d62783d621d092027bac4f5c07f7f1684bd291a754196f3f77933ba36a50d namespace=moby
	Feb 29 17:46:17 addons-268800 dockerd[1295]: time="2024-02-29T17:46:17.809110174Z" level=warning msg="cleaning up after shim disconnected" id=1c2d62783d621d092027bac4f5c07f7f1684bd291a754196f3f77933ba36a50d namespace=moby
	Feb 29 17:46:17 addons-268800 dockerd[1295]: time="2024-02-29T17:46:17.809121275Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	d0ad59b1d88d9       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:efddd4f0a8b51a7c406c67894203bc475198f54809105ce0c2df904a44180e75                            53 seconds ago       Exited              gadget                                   3                   ae4b8835783f5       gadget-dfbhw
	cadb6a01e8793       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:01b0de782aa30e7fc91ac5a91b5cc35e95e9679dee7ef07af06457b471f88f32                                 53 seconds ago       Running             gcp-auth                                 0                   ac13feae29c98       gcp-auth-5f6b4f85fd-c54xv
	8ddfb70d2d215       registry.k8s.io/ingress-nginx/controller@sha256:1405cc613bd95b2c6edd8b2a152510ae91c7e62aea4698500d23b2145960ab9c                             57 seconds ago       Running             controller                               0                   199df61d1f5c4       ingress-nginx-controller-7967645744-ckd55
	faabd4be57837       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          About a minute ago   Running             csi-snapshotter                          0                   a6931a59954dc       csi-hostpathplugin-f9pcl
	e84ebdf1a9466       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          About a minute ago   Running             csi-provisioner                          0                   a6931a59954dc       csi-hostpathplugin-f9pcl
	478dc97864846       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            About a minute ago   Running             liveness-probe                           0                   a6931a59954dc       csi-hostpathplugin-f9pcl
	a45e0fa13e47d       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           About a minute ago   Running             hostpath                                 0                   a6931a59954dc       csi-hostpathplugin-f9pcl
	bf2ffb70d3491       eb825d2bb76b9                                                                                                                                About a minute ago   Exited              patch                                    2                   dbed749a7de20       ingress-nginx-admission-patch-77vlx
	665e59994c7d1       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                About a minute ago   Running             node-driver-registrar                    0                   a6931a59954dc       csi-hostpathplugin-f9pcl
	e587c79988c05       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              About a minute ago   Running             csi-resizer                              0                   ed8a51fd72655       csi-hostpath-resizer-0
	3171c1ef9d422       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             About a minute ago   Running             csi-attacher                             0                   51300e46d03cf       csi-hostpath-attacher-0
	c8bac6915bdb4       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   About a minute ago   Running             csi-external-health-monitor-controller   0                   a6931a59954dc       csi-hostpathplugin-f9pcl
	de894c0d23b8e       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      2 minutes ago        Running             volume-snapshot-controller               0                   21979e96d0142       snapshot-controller-58dbcc7b99-k2d92
	2271ea3934cd5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:25d6a5f11211cc5c3f9f2bf552b585374af287b4debf693cacbe2da47daa5084                   2 minutes ago        Exited              create                                   0                   b9d5d1abb6165       ingress-nginx-admission-create-rpxr5
	79cb71c8d2b74       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       2 minutes ago        Running             local-path-provisioner                   0                   0d879db7cc40f       local-path-provisioner-78b46b4d5c-skhh4
	6b2d4500f745c       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      2 minutes ago        Running             volume-snapshot-controller               0                   2e78e9d581fdf       snapshot-controller-58dbcc7b99-47q9p
	666c3a7d8260b       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                                        2 minutes ago        Running             yakd                                     0                   7bf07e83cde89       yakd-dashboard-9947fc6bf-ntpzm
	8ce2a64d9105d       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  2 minutes ago        Running             tiller                                   0                   4de9faf56165f       tiller-deploy-7b677967b9-n7k8w
	e27dca89b6978       registry.k8s.io/metrics-server/metrics-server@sha256:1c0419326500f1704af580d12a579671b2c3a06a8aa918cd61d0a35fb2d6b3ce                        2 minutes ago        Running             metrics-server                           0                   bfc1750adff3b       metrics-server-69cf46c98-tnkzm
	107e349639291       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f                             2 minutes ago        Running             minikube-ingress-dns                     0                   869bc500d094b       kube-ingress-dns-minikube
	863dc2cc12052       6e38f40d628db                                                                                                                                3 minutes ago        Running             storage-provisioner                      0                   d566167e3da16       storage-provisioner
	61bdd13edb61c       ead0a4a53df89                                                                                                                                3 minutes ago        Running             coredns                                  0                   d39d87fa3d8d8       coredns-5dd5756b68-j5ggq
	bb5600ab77ce2       83f6cc407eed8                                                                                                                                3 minutes ago        Running             kube-proxy                               0                   400c86c4d17e3       kube-proxy-9vd5v
	1e202e3d5c732       73deb9a3f7025                                                                                                                                4 minutes ago        Running             etcd                                     0                   3c56df8e21aa9       etcd-addons-268800
	58b5b45ddbb80       e3db313c6dbc0                                                                                                                                4 minutes ago        Running             kube-scheduler                           0                   85fe886482673       kube-scheduler-addons-268800
	6d44928218a87       d058aa5ab969c                                                                                                                                4 minutes ago        Running             kube-controller-manager                  0                   9b5bc0610618b       kube-controller-manager-addons-268800
	4175af0fa1880       7fe0e6f37db33                                                                                                                                4 minutes ago        Running             kube-apiserver                           0                   158600f4001fa       kube-apiserver-addons-268800
	
	
	==> controller_ingress [8ddfb70d2d21] <==
	  Build:         6a73aa3b05040a97ef8213675a16142a9c95952a
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.21.6
	
	-------------------------------------------------------------------------------
	
	W0229 17:45:28.294233       7 client_config.go:618] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0229 17:45:28.294394       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0229 17:45:28.307634       7 main.go:249] "Running in Kubernetes cluster" major="1" minor="28" git="v1.28.4" state="clean" commit="bae2c62678db2b5053817bc97181fcc2e8388103" platform="linux/amd64"
	I0229 17:45:29.083363       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0229 17:45:29.112859       7 ssl.go:536] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0229 17:45:29.136160       7 nginx.go:260] "Starting NGINX Ingress controller"
	I0229 17:45:29.156009       7 event.go:298] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"a2d2fabe-0850-408c-9966-19cb0b5c8f73", APIVersion:"v1", ResourceVersion:"708", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0229 17:45:29.165888       7 event.go:298] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"ff182be4-a2af-471e-9d6a-ad417d4e2f65", APIVersion:"v1", ResourceVersion:"710", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0229 17:45:29.166007       7 event.go:298] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"0250d7b2-456a-4edd-a508-36465e5de103", APIVersion:"v1", ResourceVersion:"711", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0229 17:45:30.338378       7 nginx.go:303] "Starting NGINX process"
	I0229 17:45:30.338699       7 leaderelection.go:245] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0229 17:45:30.339358       7 nginx.go:323] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0229 17:45:30.339885       7 controller.go:190] "Configuration changes detected, backend reload required"
	I0229 17:45:30.356947       7 leaderelection.go:255] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0229 17:45:30.357086       7 status.go:84] "New leader elected" identity="ingress-nginx-controller-7967645744-ckd55"
	I0229 17:45:30.365217       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-7967645744-ckd55" node="addons-268800"
	I0229 17:45:30.552498       7 controller.go:210] "Backend successfully reloaded"
	I0229 17:45:30.552591       7 controller.go:221] "Initial sync, sleeping for 1 second"
	I0229 17:45:30.552626       7 event.go:298] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7967645744-ckd55", UID:"ef6d700c-09fb-4231-b9a0-1dbb29fe9fe0", APIVersion:"v1", ResourceVersion:"734", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	
	
	==> coredns [61bdd13edb61] <==
	[INFO] plugin/reload: Running configuration SHA512 = 09f0998677e0c19d72433bdbc19471218bfe4a8b92405418740861874d1549e73cec4df8f6750d3139464010abec770181315be2b4c8b222ced8b0f05062ec0c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:38713 - 59473 "HINFO IN 1467563405355709719.4446283164741695315. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.055600449s
	[INFO] 10.244.0.9:35282 - 25178 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000277115s
	[INFO] 10.244.0.9:35282 - 39763 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000342319s
	[INFO] 10.244.0.9:58835 - 61970 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000129607s
	[INFO] 10.244.0.9:58835 - 37137 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000303716s
	[INFO] 10.244.0.9:53357 - 55335 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000101506s
	[INFO] 10.244.0.9:53357 - 7972 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000243614s
	[INFO] 10.244.0.9:59621 - 47342 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00019421s
	[INFO] 10.244.0.9:59621 - 41704 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000415923s
	[INFO] 10.244.0.9:35401 - 10819 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000078605s
	[INFO] 10.244.0.9:44411 - 33507 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000067704s
	[INFO] 10.244.0.9:59341 - 53509 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000061603s
	[INFO] 10.244.0.9:59426 - 27926 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000129807s
	[INFO] 10.244.0.22:45224 - 39162 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000268815s
	[INFO] 10.244.0.22:59379 - 12653 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000131607s
	[INFO] 10.244.0.22:36630 - 22485 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000113407s
	[INFO] 10.244.0.22:50417 - 41694 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000077704s
	[INFO] 10.244.0.22:46401 - 39537 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000083404s
	[INFO] 10.244.0.22:56028 - 44375 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000057203s
	[INFO] 10.244.0.22:41168 - 15176 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 240 0.007639027s
	[INFO] 10.244.0.22:42429 - 6695 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 230 0.007911643s
	[INFO] 10.244.0.25:50416 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000403022s
	[INFO] 10.244.0.25:53167 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00018711s
	
	
	==> describe nodes <==
	Name:               addons-268800
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-268800
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19
	                    minikube.k8s.io/name=addons-268800
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_29T17_42_12_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-268800
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-268800"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 17:42:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-268800
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 17:46:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 17:46:18 +0000   Thu, 29 Feb 2024 17:42:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 17:46:18 +0000   Thu, 29 Feb 2024 17:42:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 17:46:18 +0000   Thu, 29 Feb 2024 17:42:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 17:46:18 +0000   Thu, 29 Feb 2024 17:42:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.26.58.180
	  Hostname:    addons-268800
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912876Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912876Ki
	  pods:               110
	System Info:
	  Machine ID:                 62a1b131bda844228556ce2c016a03d1
	  System UUID:                cf9449fa-4882-3241-b224-6ed0b78f1c87
	  Boot ID:                    16870ff0-6758-48b4-b2f8-e0b19ce93523
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  gadget                      gadget-dfbhw                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  gcp-auth                    gcp-auth-5f6b4f85fd-c54xv                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m25s
	  ingress-nginx               ingress-nginx-controller-7967645744-ckd55    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         3m30s
	  kube-system                 coredns-5dd5756b68-j5ggq                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m58s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 csi-hostpathplugin-f9pcl                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 etcd-addons-268800                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m12s
	  kube-system                 kube-apiserver-addons-268800                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-controller-manager-addons-268800        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 kube-proxy-9vd5v                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-scheduler-addons-268800                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 metrics-server-69cf46c98-tnkzm               100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         3m35s
	  kube-system                 snapshot-controller-58dbcc7b99-47q9p         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 snapshot-controller-58dbcc7b99-k2d92         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 tiller-deploy-7b677967b9-n7k8w               0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  local-path-storage          local-path-provisioner-78b46b4d5c-skhh4      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-ntpzm               0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     3m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             588Mi (15%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m48s                  kube-proxy       
	  Normal  Starting                 4m18s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m18s (x8 over 4m18s)  kubelet          Node addons-268800 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m18s (x8 over 4m18s)  kubelet          Node addons-268800 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m18s (x7 over 4m18s)  kubelet          Node addons-268800 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m10s                  kubelet          Node addons-268800 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s                  kubelet          Node addons-268800 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s                  kubelet          Node addons-268800 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m10s                  kubelet          Node addons-268800 status is now: NodeReady
	  Normal  RegisteredNode           3m59s                  node-controller  Node addons-268800 event: Registered Node addons-268800 in Controller
	
	
	==> dmesg <==
	[Feb29 17:42] systemd-fstab-generator[1647]: Ignoring "noauto" option for root device
	[  +0.094619] kauditd_printk_skb: 29 callbacks suppressed
	[  +7.771129] systemd-fstab-generator[2580]: Ignoring "noauto" option for root device
	[  +0.121130] kauditd_printk_skb: 62 callbacks suppressed
	[ +18.046349] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.004848] kauditd_printk_skb: 44 callbacks suppressed
	[ +10.491982] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.131380] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.019763] kauditd_printk_skb: 98 callbacks suppressed
	[Feb29 17:43] kauditd_printk_skb: 66 callbacks suppressed
	[ +35.262700] kauditd_printk_skb: 2 callbacks suppressed
	[Feb29 17:44] kauditd_printk_skb: 33 callbacks suppressed
	[ +15.206867] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.284943] kauditd_printk_skb: 23 callbacks suppressed
	[ +10.736656] kauditd_printk_skb: 58 callbacks suppressed
	[  +5.487635] kauditd_printk_skb: 13 callbacks suppressed
	[Feb29 17:45] kauditd_printk_skb: 24 callbacks suppressed
	[ +16.035263] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.007210] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.754835] kauditd_printk_skb: 4 callbacks suppressed
	[  +3.103120] hrtimer: interrupt took 443924 ns
	[  +2.210995] kauditd_printk_skb: 35 callbacks suppressed
	[ +11.941628] kauditd_printk_skb: 1 callbacks suppressed
	[Feb29 17:46] kauditd_printk_skb: 22 callbacks suppressed
	[ +13.985642] kauditd_printk_skb: 16 callbacks suppressed
	
	
	==> etcd [1e202e3d5c73] <==
	{"level":"warn","ts":"2024-02-29T17:43:35.280497Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"289.263431ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:81674"}
	{"level":"info","ts":"2024-02-29T17:43:35.280617Z","caller":"traceutil/trace.go:171","msg":"trace[925453304] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:947; }","duration":"289.385638ms","start":"2024-02-29T17:43:34.991223Z","end":"2024-02-29T17:43:35.280609Z","steps":["trace[925453304] 'agreement among raft nodes before linearized reading'  (duration: 289.127024ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T17:43:41.145758Z","caller":"traceutil/trace.go:171","msg":"trace[1334199928] linearizableReadLoop","detail":"{readStateIndex:1004; appliedIndex:1003; }","duration":"168.765833ms","start":"2024-02-29T17:43:40.976973Z","end":"2024-02-29T17:43:41.145739Z","steps":["trace[1334199928] 'read index received'  (duration: 166.266597ms)","trace[1334199928] 'applied index is now lower than readState.Index'  (duration: 2.498336ms)"],"step_count":2}
	{"level":"info","ts":"2024-02-29T17:43:41.147691Z","caller":"traceutil/trace.go:171","msg":"trace[1892093709] transaction","detail":"{read_only:false; response_revision:967; number_of_response:1; }","duration":"192.653741ms","start":"2024-02-29T17:43:40.95502Z","end":"2024-02-29T17:43:41.147674Z","steps":["trace[1892093709] 'process raft request'  (duration: 188.292003ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T17:43:41.14885Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.161465ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10577"}
	{"level":"info","ts":"2024-02-29T17:43:41.149047Z","caller":"traceutil/trace.go:171","msg":"trace[729678850] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:967; }","duration":"171.947808ms","start":"2024-02-29T17:43:40.976946Z","end":"2024-02-29T17:43:41.148894Z","steps":["trace[729678850] 'agreement among raft nodes before linearized reading'  (duration: 171.034758ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T17:43:41.149746Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.660347ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:81786"}
	{"level":"info","ts":"2024-02-29T17:43:41.149953Z","caller":"traceutil/trace.go:171","msg":"trace[86700043] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:967; }","duration":"172.874358ms","start":"2024-02-29T17:43:40.97707Z","end":"2024-02-29T17:43:41.149944Z","steps":["trace[86700043] 'agreement among raft nodes before linearized reading'  (duration: 172.546841ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T17:43:49.67986Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.506416ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-02-29T17:43:49.680189Z","caller":"traceutil/trace.go:171","msg":"trace[567217371] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:0; response_revision:985; }","duration":"104.845734ms","start":"2024-02-29T17:43:49.575329Z","end":"2024-02-29T17:43:49.680175Z","steps":["trace[567217371] 'count revisions from in-memory index tree'  (duration: 104.420811ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T17:44:20.078379Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.256956ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82179"}
	{"level":"info","ts":"2024-02-29T17:44:20.078595Z","caller":"traceutil/trace.go:171","msg":"trace[1424340138] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1050; }","duration":"107.48827ms","start":"2024-02-29T17:44:19.971089Z","end":"2024-02-29T17:44:20.078577Z","steps":["trace[1424340138] 'range keys from in-memory index tree'  (duration: 106.879236ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T17:44:28.029641Z","caller":"traceutil/trace.go:171","msg":"trace[2055696229] transaction","detail":"{read_only:false; response_revision:1098; number_of_response:1; }","duration":"115.35161ms","start":"2024-02-29T17:44:27.914269Z","end":"2024-02-29T17:44:28.029621Z","steps":["trace[2055696229] 'process raft request'  (duration: 114.930987ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T17:44:33.092083Z","caller":"traceutil/trace.go:171","msg":"trace[1641934088] transaction","detail":"{read_only:false; response_revision:1126; number_of_response:1; }","duration":"201.136026ms","start":"2024-02-29T17:44:32.890931Z","end":"2024-02-29T17:44:33.092067Z","steps":["trace[1641934088] 'process raft request'  (duration: 200.813708ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T17:44:33.096003Z","caller":"traceutil/trace.go:171","msg":"trace[907349004] linearizableReadLoop","detail":"{readStateIndex:1176; appliedIndex:1174; }","duration":"119.169244ms","start":"2024-02-29T17:44:32.976819Z","end":"2024-02-29T17:44:33.095989Z","steps":["trace[907349004] 'read index received'  (duration: 114.904209ms)","trace[907349004] 'applied index is now lower than readState.Index'  (duration: 4.264335ms)"],"step_count":2}
	{"level":"info","ts":"2024-02-29T17:44:33.096175Z","caller":"traceutil/trace.go:171","msg":"trace[1442962403] transaction","detail":"{read_only:false; response_revision:1127; number_of_response:1; }","duration":"184.517318ms","start":"2024-02-29T17:44:32.911649Z","end":"2024-02-29T17:44:33.096166Z","steps":["trace[1442962403] 'process raft request'  (duration: 184.249504ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T17:44:33.099148Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.411922ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82513"}
	{"level":"info","ts":"2024-02-29T17:44:33.099433Z","caller":"traceutil/trace.go:171","msg":"trace[2097087647] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1127; }","duration":"122.462525ms","start":"2024-02-29T17:44:32.976713Z","end":"2024-02-29T17:44:33.099176Z","steps":["trace[2097087647] 'agreement among raft nodes before linearized reading'  (duration: 119.672371ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T17:44:33.099695Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.735165ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11169"}
	{"level":"info","ts":"2024-02-29T17:44:33.099719Z","caller":"traceutil/trace.go:171","msg":"trace[1716119319] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1127; }","duration":"106.792568ms","start":"2024-02-29T17:44:32.992918Z","end":"2024-02-29T17:44:33.099711Z","steps":["trace[1716119319] 'agreement among raft nodes before linearized reading'  (duration: 106.700763ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T17:45:30.681212Z","caller":"traceutil/trace.go:171","msg":"trace[537609605] linearizableReadLoop","detail":"{readStateIndex:1360; appliedIndex:1359; }","duration":"101.684394ms","start":"2024-02-29T17:45:30.579482Z","end":"2024-02-29T17:45:30.681167Z","steps":["trace[537609605] 'read index received'  (duration: 100.854348ms)","trace[537609605] 'applied index is now lower than readState.Index'  (duration: 829.046µs)"],"step_count":2}
	{"level":"warn","ts":"2024-02-29T17:45:30.681483Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.947309ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-02-29T17:45:30.68152Z","caller":"traceutil/trace.go:171","msg":"trace[708310272] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1297; }","duration":"102.093418ms","start":"2024-02-29T17:45:30.579416Z","end":"2024-02-29T17:45:30.681509Z","steps":["trace[708310272] 'agreement among raft nodes before linearized reading'  (duration: 101.854504ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T17:45:30.68182Z","caller":"traceutil/trace.go:171","msg":"trace[708319117] transaction","detail":"{read_only:false; response_revision:1297; number_of_response:1; }","duration":"126.623191ms","start":"2024-02-29T17:45:30.555185Z","end":"2024-02-29T17:45:30.681808Z","steps":["trace[708319117] 'process raft request'  (duration: 125.34782ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T17:45:55.143316Z","caller":"traceutil/trace.go:171","msg":"trace[1154037333] transaction","detail":"{read_only:false; response_revision:1429; number_of_response:1; }","duration":"211.512949ms","start":"2024-02-29T17:45:54.931786Z","end":"2024-02-29T17:45:55.143299Z","steps":["trace[1154037333] 'process raft request'  (duration: 211.405743ms)"],"step_count":1}
	
	
	==> gcp-auth [cadb6a01e879] <==
	2024/02/29 17:45:31 GCP Auth Webhook started!
	2024/02/29 17:45:32 Ready to marshal response ...
	2024/02/29 17:45:32 Ready to write response ...
	2024/02/29 17:45:32 Ready to marshal response ...
	2024/02/29 17:45:32 Ready to write response ...
	2024/02/29 17:45:42 Ready to marshal response ...
	2024/02/29 17:45:42 Ready to write response ...
	2024/02/29 17:45:42 Ready to marshal response ...
	2024/02/29 17:45:42 Ready to write response ...
	2024/02/29 17:45:54 Ready to marshal response ...
	2024/02/29 17:45:54 Ready to write response ...
	
	
	==> kernel <==
	 17:46:23 up 6 min,  0 users,  load average: 2.01, 1.68, 0.77
	Linux addons-268800 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4175af0fa188] <==
	I0229 17:43:09.239189       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0229 17:43:16.756389       1 trace.go:236] Trace[1137023898]: "List" accept:application/json, */*,audit-id:583940b6-9088-45eb-9be8-4e3937df2514,client:172.26.48.1,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/ingress-nginx/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (29-Feb-2024 17:43:15.688) (total time: 1067ms):
	Trace[1137023898]: ["List(recursive=true) etcd3" audit-id:583940b6-9088-45eb-9be8-4e3937df2514,key:/pods/ingress-nginx,resourceVersion:,resourceVersionMatch:,limit:0,continue: 1067ms (17:43:15.688)]
	Trace[1137023898]: [1.067500687s] [1.067500687s] END
	I0229 17:43:16.756707       1 trace.go:236] Trace[1954357225]: "List" accept:application/json, */*,audit-id:4a39a51e-6dcb-4914-8cac-891978efb5e5,client:172.26.48.1,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/kube-system/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (29-Feb-2024 17:43:15.689) (total time: 1067ms):
	Trace[1954357225]: ["List(recursive=true) etcd3" audit-id:4a39a51e-6dcb-4914-8cac-891978efb5e5,key:/pods/kube-system,resourceVersion:,resourceVersionMatch:,limit:0,continue: 1067ms (17:43:15.689)]
	Trace[1954357225]: [1.067587591s] [1.067587591s] END
	I0229 17:43:16.778600       1 trace.go:236] Trace[797624750]: "List" accept:application/json, */*,audit-id:87caaf8c-ea62-4945-8046-0ec989fd7cf5,client:172.26.48.1,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/gcp-auth/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (29-Feb-2024 17:43:15.975) (total time: 802ms):
	Trace[797624750]: ["List(recursive=true) etcd3" audit-id:87caaf8c-ea62-4945-8046-0ec989fd7cf5,key:/pods/gcp-auth,resourceVersion:,resourceVersionMatch:,limit:0,continue: 802ms (17:43:15.975)]
	Trace[797624750]: [802.870188ms] [802.870188ms] END
	I0229 17:43:16.778850       1 trace.go:236] Trace[938735888]: "List" accept:application/json, */*,audit-id:5aef3977-0146-4605-a3cd-a17568132f64,client:172.26.48.1,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/kube-system/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (29-Feb-2024 17:43:15.975) (total time: 803ms):
	Trace[938735888]: ["List(recursive=true) etcd3" audit-id:5aef3977-0146-4605-a3cd-a17568132f64,key:/pods/kube-system,resourceVersion:,resourceVersionMatch:,limit:0,continue: 803ms (17:43:15.975)]
	Trace[938735888]: [803.152503ms] [803.152503ms] END
	W0229 17:43:40.882147       1 handler_proxy.go:93] no RequestInfo found in the context
	E0229 17:43:40.882468       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0229 17:43:40.882562       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.117.221:443/apis/metrics.k8s.io/v1beta1: Get "https://10.96.117.221:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.96.117.221:443: connect: connection refused
	I0229 17:43:40.883255       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0229 17:43:40.885651       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.117.221:443/apis/metrics.k8s.io/v1beta1: Get "https://10.96.117.221:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.96.117.221:443: connect: connection refused
	E0229 17:43:40.890517       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.117.221:443/apis/metrics.k8s.io/v1beta1: Get "https://10.96.117.221:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.96.117.221:443: connect: connection refused
	I0229 17:43:41.187941       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0229 17:44:09.234510       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0229 17:45:09.235042       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0229 17:46:03.961735       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0229 17:46:09.235709       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [6d44928218a8] <==
	I0229 17:44:39.000775       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0229 17:44:39.012431       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0229 17:44:39.012680       1 event.go:307] "Event occurred" object="ingress-nginx/ingress-nginx-admission-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0229 17:44:39.112040       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0229 17:44:49.126297       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="8.865481ms"
	I0229 17:44:49.126402       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="75.704µs"
	I0229 17:45:04.019910       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0229 17:45:04.023093       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0229 17:45:04.068035       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0229 17:45:04.070902       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0229 17:45:29.360806       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7967645744" duration="105.306µs"
	I0229 17:45:31.509119       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-5f6b4f85fd" duration="41.359916ms"
	I0229 17:45:31.509481       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-5f6b4f85fd" duration="124.507µs"
	I0229 17:45:32.152948       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I0229 17:45:32.235698       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0229 17:45:32.529335       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0229 17:45:39.297164       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0229 17:45:41.748258       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0229 17:45:42.116307       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7967645744" duration="24.04769ms"
	I0229 17:45:42.117827       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7967645744" duration="60.003µs"
	I0229 17:46:03.507391       1 replica_set.go:676] "Finished syncing" kind="ReplicationController" key="kube-system/registry" duration="14.101µs"
	I0229 17:46:07.134919       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0229 17:46:09.297777       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0229 17:46:10.815981       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-78b46b4d5c" duration="5.3µs"
	I0229 17:46:17.540406       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/cloud-spanner-emulator-6548d5df46" duration="6.1µs"
	
	
	==> kube-proxy [bb5600ab77ce] <==
	I0229 17:42:34.751678       1 server_others.go:69] "Using iptables proxy"
	I0229 17:42:34.864138       1 node.go:141] Successfully retrieved node IP: 172.26.58.180
	I0229 17:42:35.040196       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0229 17:42:35.040559       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0229 17:42:35.122124       1 server_others.go:152] "Using iptables Proxier"
	I0229 17:42:35.122202       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 17:42:35.123757       1 server.go:846] "Version info" version="v1.28.4"
	I0229 17:42:35.124002       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 17:42:35.126302       1 config.go:188] "Starting service config controller"
	I0229 17:42:35.126542       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 17:42:35.126677       1 config.go:97] "Starting endpoint slice config controller"
	I0229 17:42:35.126777       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 17:42:35.140237       1 config.go:315] "Starting node config controller"
	I0229 17:42:35.140292       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 17:42:35.226836       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0229 17:42:35.226915       1 shared_informer.go:318] Caches are synced for service config
	I0229 17:42:35.250970       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [58b5b45ddbb8] <==
	W0229 17:42:10.215666       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0229 17:42:10.215763       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0229 17:42:10.395549       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0229 17:42:10.395685       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0229 17:42:10.410297       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0229 17:42:10.410515       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0229 17:42:10.431222       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0229 17:42:10.431835       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0229 17:42:10.538176       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0229 17:42:10.538545       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0229 17:42:10.618647       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0229 17:42:10.618941       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0229 17:42:10.638304       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0229 17:42:10.638352       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0229 17:42:10.660533       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0229 17:42:10.661798       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0229 17:42:10.710218       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0229 17:42:10.710508       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0229 17:42:10.718565       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0229 17:42:10.718802       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0229 17:42:10.796480       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0229 17:42:10.796645       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0229 17:42:10.796880       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0229 17:42:10.797419       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0229 17:42:13.182086       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 29 17:46:06 addons-268800 kubelet[2599]: I0229 17:46:06.497135    2599 scope.go:117] "RemoveContainer" containerID="27b7fae40cee1c6b0a8f3b6f3a007aaf34b1ade113f79ed2cf574056d7ae515b"
	Feb 29 17:46:06 addons-268800 kubelet[2599]: E0229 17:46:06.507076    2599 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 27b7fae40cee1c6b0a8f3b6f3a007aaf34b1ade113f79ed2cf574056d7ae515b" containerID="27b7fae40cee1c6b0a8f3b6f3a007aaf34b1ade113f79ed2cf574056d7ae515b"
	Feb 29 17:46:06 addons-268800 kubelet[2599]: I0229 17:46:06.507302    2599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"27b7fae40cee1c6b0a8f3b6f3a007aaf34b1ade113f79ed2cf574056d7ae515b"} err="failed to get container status \"27b7fae40cee1c6b0a8f3b6f3a007aaf34b1ade113f79ed2cf574056d7ae515b\": rpc error: code = Unknown desc = Error response from daemon: No such container: 27b7fae40cee1c6b0a8f3b6f3a007aaf34b1ade113f79ed2cf574056d7ae515b"
	Feb 29 17:46:07 addons-268800 kubelet[2599]: I0229 17:46:07.072084    2599 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="272e10c3-bb6b-4c25-8a39-52fef4b920c7" path="/var/lib/kubelet/pods/272e10c3-bb6b-4c25-8a39-52fef4b920c7/volumes"
	Feb 29 17:46:07 addons-268800 kubelet[2599]: I0229 17:46:07.072724    2599 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c7b49716-3ace-4016-a398-caf565d5c035" path="/var/lib/kubelet/pods/c7b49716-3ace-4016-a398-caf565d5c035/volumes"
	Feb 29 17:46:07 addons-268800 kubelet[2599]: I0229 17:46:07.073383    2599 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d417867a-c0a3-420c-8830-040f24638a9e" path="/var/lib/kubelet/pods/d417867a-c0a3-420c-8830-040f24638a9e/volumes"
	Feb 29 17:46:12 addons-268800 kubelet[2599]: I0229 17:46:12.037334    2599 scope.go:117] "RemoveContainer" containerID="d0ad59b1d88d9a0c5a2a5deeee1b1a6df98de5f37dbb6b62603a184b82373368"
	Feb 29 17:46:12 addons-268800 kubelet[2599]: E0229 17:46:12.037856    2599 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 40s restarting failed container=gadget pod=gadget-dfbhw_gadget(99b39921-fba5-4efa-9b5d-599d2726406f)\"" pod="gadget/gadget-dfbhw" podUID="99b39921-fba5-4efa-9b5d-599d2726406f"
	Feb 29 17:46:13 addons-268800 kubelet[2599]: E0229 17:46:13.093160    2599 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 17:46:13 addons-268800 kubelet[2599]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 17:46:13 addons-268800 kubelet[2599]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 17:46:13 addons-268800 kubelet[2599]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 17:46:13 addons-268800 kubelet[2599]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 17:46:13 addons-268800 kubelet[2599]: I0229 17:46:13.167734    2599 scope.go:117] "RemoveContainer" containerID="c2f8a0f40700dfdc41aacee583c44b6fe6d3e1361f0512a7f7252fe1e4371831"
	Feb 29 17:46:13 addons-268800 kubelet[2599]: I0229 17:46:13.204882    2599 scope.go:117] "RemoveContainer" containerID="59d3ca770996fb7d000e1f888ad6ec5bc7770e518bb2392dbb3323e852a76821"
	Feb 29 17:46:13 addons-268800 kubelet[2599]: I0229 17:46:13.238102    2599 scope.go:117] "RemoveContainer" containerID="11c82c662a72dfb142ca6abf63a58f316e3dea97f88e5db60fcd9fc84742926a"
	Feb 29 17:46:18 addons-268800 kubelet[2599]: I0229 17:46:18.006278    2599 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvwrc\" (UniqueName: \"kubernetes.io/projected/e4152ace-bd12-4208-b9ec-8e44ad53e5db-kube-api-access-xvwrc\") pod \"e4152ace-bd12-4208-b9ec-8e44ad53e5db\" (UID: \"e4152ace-bd12-4208-b9ec-8e44ad53e5db\") "
	Feb 29 17:46:18 addons-268800 kubelet[2599]: I0229 17:46:18.012830    2599 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4152ace-bd12-4208-b9ec-8e44ad53e5db-kube-api-access-xvwrc" (OuterVolumeSpecName: "kube-api-access-xvwrc") pod "e4152ace-bd12-4208-b9ec-8e44ad53e5db" (UID: "e4152ace-bd12-4208-b9ec-8e44ad53e5db"). InnerVolumeSpecName "kube-api-access-xvwrc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 29 17:46:18 addons-268800 kubelet[2599]: I0229 17:46:18.107852    2599 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xvwrc\" (UniqueName: \"kubernetes.io/projected/e4152ace-bd12-4208-b9ec-8e44ad53e5db-kube-api-access-xvwrc\") on node \"addons-268800\" DevicePath \"\""
	Feb 29 17:46:18 addons-268800 kubelet[2599]: I0229 17:46:18.730347    2599 scope.go:117] "RemoveContainer" containerID="b569a2d46175b3994c6797d2ff8d4b7ada38cb1422431f826323c848c6f0c880"
	Feb 29 17:46:18 addons-268800 kubelet[2599]: I0229 17:46:18.764630    2599 scope.go:117] "RemoveContainer" containerID="b569a2d46175b3994c6797d2ff8d4b7ada38cb1422431f826323c848c6f0c880"
	Feb 29 17:46:18 addons-268800 kubelet[2599]: E0229 17:46:18.765928    2599 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: b569a2d46175b3994c6797d2ff8d4b7ada38cb1422431f826323c848c6f0c880" containerID="b569a2d46175b3994c6797d2ff8d4b7ada38cb1422431f826323c848c6f0c880"
	Feb 29 17:46:18 addons-268800 kubelet[2599]: I0229 17:46:18.766035    2599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"b569a2d46175b3994c6797d2ff8d4b7ada38cb1422431f826323c848c6f0c880"} err="failed to get container status \"b569a2d46175b3994c6797d2ff8d4b7ada38cb1422431f826323c848c6f0c880\": rpc error: code = Unknown desc = Error response from daemon: No such container: b569a2d46175b3994c6797d2ff8d4b7ada38cb1422431f826323c848c6f0c880"
	Feb 29 17:46:19 addons-268800 kubelet[2599]: I0229 17:46:19.053227    2599 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="e4152ace-bd12-4208-b9ec-8e44ad53e5db" path="/var/lib/kubelet/pods/e4152ace-bd12-4208-b9ec-8e44ad53e5db/volumes"
	Feb 29 17:46:24 addons-268800 kubelet[2599]: I0229 17:46:24.037780    2599 scope.go:117] "RemoveContainer" containerID="d0ad59b1d88d9a0c5a2a5deeee1b1a6df98de5f37dbb6b62603a184b82373368"
	
	
	==> storage-provisioner [863dc2cc1205] <==
	I0229 17:42:55.811390       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0229 17:42:55.885175       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0229 17:42:55.885236       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0229 17:42:55.950001       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0229 17:42:56.075594       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-268800_c4fd88b1-1ffb-4ae2-b08f-b506daac13ff!
	I0229 17:42:55.974194       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d7f4fb24-5337-4952-a318-7b641dc0bc3d", APIVersion:"v1", ResourceVersion:"822", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-268800_c4fd88b1-1ffb-4ae2-b08f-b506daac13ff became leader
	I0229 17:42:56.286886       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-268800_c4fd88b1-1ffb-4ae2-b08f-b506daac13ff!
	

-- /stdout --
** stderr ** 
	W0229 17:46:15.779464    4284 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-268800 -n addons-268800
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-268800 -n addons-268800: (11.5468776s)
helpers_test.go:261: (dbg) Run:  kubectl --context addons-268800 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: headlamp-7ddfbb94ff-t4hjv ingress-nginx-admission-create-rpxr5 ingress-nginx-admission-patch-77vlx
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-268800 describe pod headlamp-7ddfbb94ff-t4hjv ingress-nginx-admission-create-rpxr5 ingress-nginx-admission-patch-77vlx
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-268800 describe pod headlamp-7ddfbb94ff-t4hjv ingress-nginx-admission-create-rpxr5 ingress-nginx-admission-patch-77vlx: exit status 1 (152.676ms)

** stderr ** 
	Error from server (NotFound): pods "headlamp-7ddfbb94ff-t4hjv" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-rpxr5" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-77vlx" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-268800 describe pod headlamp-7ddfbb94ff-t4hjv ingress-nginx-admission-create-rpxr5 ingress-nginx-admission-patch-77vlx: exit status 1
--- FAIL: TestAddons/parallel/Registry (65.18s)

TestErrorSpam/setup (176.74s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-954700 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-954700 --driver=hyperv
E0229 17:50:31.651381    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
E0229 17:50:31.667080    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
E0229 17:50:31.682584    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
E0229 17:50:31.713569    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
E0229 17:50:31.761547    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
E0229 17:50:31.855331    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
E0229 17:50:32.028236    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
E0229 17:50:32.360784    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
E0229 17:50:33.010089    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
E0229 17:50:34.300518    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
E0229 17:50:36.865562    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
E0229 17:50:41.988528    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
E0229 17:50:52.234193    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
E0229 17:51:12.730325    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
E0229 17:51:53.704464    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-954700 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-954700 --driver=hyperv: (2m56.7405727s)
error_spam_test.go:96: unexpected stderr: "W0229 17:50:01.197532    9020 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:110: minikube stdout:
* [nospam-954700] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
- KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
- MINIKUBE_LOCATION=18259
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting control plane node nospam-954700 in cluster nospam-954700
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-954700" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W0229 17:50:01.197532    9020 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
--- FAIL: TestErrorSpam/setup (176.74s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (30.61s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:731: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-070600 -n functional-070600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-070600 -n functional-070600: (10.944567s)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 logs -n 25: (7.7110568s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-954700 --log_dir                                     | nospam-954700     | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:54 UTC | 29 Feb 24 17:54 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-954700 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-954700 --log_dir                                     | nospam-954700     | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:54 UTC | 29 Feb 24 17:54 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-954700 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-954700 --log_dir                                     | nospam-954700     | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:54 UTC | 29 Feb 24 17:54 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-954700 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-954700 --log_dir                                     | nospam-954700     | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:54 UTC | 29 Feb 24 17:54 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-954700 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-954700 --log_dir                                     | nospam-954700     | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:54 UTC | 29 Feb 24 17:55 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-954700 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-954700 --log_dir                                     | nospam-954700     | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:55 UTC | 29 Feb 24 17:55 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-954700 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-954700 --log_dir                                     | nospam-954700     | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:55 UTC | 29 Feb 24 17:55 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-954700 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-954700                                            | nospam-954700     | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:55 UTC | 29 Feb 24 17:55 UTC |
	| start   | -p functional-070600                                        | functional-070600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:55 UTC | 29 Feb 24 17:59 UTC |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |         |                     |                     |
	| start   | -p functional-070600                                        | functional-070600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:59 UTC | 29 Feb 24 18:01 UTC |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-070600 cache add                                 | functional-070600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 18:01 UTC | 29 Feb 24 18:01 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-070600 cache add                                 | functional-070600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 18:01 UTC | 29 Feb 24 18:01 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-070600 cache add                                 | functional-070600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 18:01 UTC | 29 Feb 24 18:01 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-070600 cache add                                 | functional-070600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 18:01 UTC | 29 Feb 24 18:01 UTC |
	|         | minikube-local-cache-test:functional-070600                 |                   |                   |         |                     |                     |
	| cache   | functional-070600 cache delete                              | functional-070600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 18:01 UTC | 29 Feb 24 18:01 UTC |
	|         | minikube-local-cache-test:functional-070600                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube5\jenkins | v1.32.0 | 29 Feb 24 18:01 UTC | 29 Feb 24 18:01 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube5\jenkins | v1.32.0 | 29 Feb 24 18:01 UTC | 29 Feb 24 18:01 UTC |
	| ssh     | functional-070600 ssh sudo                                  | functional-070600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 18:01 UTC | 29 Feb 24 18:01 UTC |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-070600                                           | functional-070600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 18:01 UTC | 29 Feb 24 18:01 UTC |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-070600 ssh                                       | functional-070600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 18:01 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-070600 cache reload                              | functional-070600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 18:02 UTC | 29 Feb 24 18:02 UTC |
	| ssh     | functional-070600 ssh                                       | functional-070600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 18:02 UTC | 29 Feb 24 18:02 UTC |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube5\jenkins | v1.32.0 | 29 Feb 24 18:02 UTC | 29 Feb 24 18:02 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube5\jenkins | v1.32.0 | 29 Feb 24 18:02 UTC | 29 Feb 24 18:02 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-070600 kubectl --                                | functional-070600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 18:02 UTC | 29 Feb 24 18:02 UTC |
	|         | --context functional-070600                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 17:59:16
	Running on machine: minikube5
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 17:59:16.297749    6380 out.go:291] Setting OutFile to fd 260 ...
	I0229 17:59:16.298429    6380 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:59:16.298429    6380 out.go:304] Setting ErrFile to fd 612...
	I0229 17:59:16.298524    6380 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:59:16.317735    6380 out.go:298] Setting JSON to false
	I0229 17:59:16.320697    6380 start.go:129] hostinfo: {"hostname":"minikube5","uptime":51293,"bootTime":1709178262,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0229 17:59:16.320697    6380 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 17:59:16.321697    6380 out.go:177] * [functional-070600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 17:59:16.322696    6380 notify.go:220] Checking for updates...
	I0229 17:59:16.322696    6380 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 17:59:16.323706    6380 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 17:59:16.324704    6380 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0229 17:59:16.324704    6380 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 17:59:16.325698    6380 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 17:59:16.326709    6380 config.go:182] Loaded profile config "functional-070600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 17:59:16.326709    6380 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 17:59:21.348264    6380 out.go:177] * Using the hyperv driver based on existing profile
	I0229 17:59:21.349184    6380 start.go:299] selected driver: hyperv
	I0229 17:59:21.349184    6380 start.go:903] validating driver "hyperv" against &{Name:functional-070600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.28.4 ClusterName:functional-070600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:172.26.52.106 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:
/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 17:59:21.349336    6380 start.go:914] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 17:59:21.394158    6380 cni.go:84] Creating CNI manager for ""
	I0229 17:59:21.394158    6380 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 17:59:21.394158    6380 start_flags.go:323] config:
	{Name:functional-070600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-070600 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:172.26.52.106 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26214
4 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 17:59:21.394796    6380 iso.go:125] acquiring lock: {Name:mk91f2ee29fbed5605669750e8cfa308a1229357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 17:59:21.395413    6380 out.go:177] * Starting control plane node functional-070600 in cluster functional-070600
	I0229 17:59:21.396366    6380 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 17:59:21.396366    6380 preload.go:148] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0229 17:59:21.396366    6380 cache.go:56] Caching tarball of preloaded images
	I0229 17:59:21.396366    6380 preload.go:174] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 17:59:21.396366    6380 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0229 17:59:21.396366    6380 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\config.json ...
	I0229 17:59:21.398974    6380 start.go:365] acquiring machines lock for functional-070600: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 17:59:21.398974    6380 start.go:369] acquired machines lock for "functional-070600" in 0s
	I0229 17:59:21.399714    6380 start.go:96] Skipping create...Using existing machine configuration
	I0229 17:59:21.399771    6380 fix.go:54] fixHost starting: 
	I0229 17:59:21.399906    6380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-070600 ).state
	I0229 17:59:24.045560    6380 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:59:24.045857    6380 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:59:24.045857    6380 fix.go:102] recreateIfNeeded on functional-070600: state=Running err=<nil>
	W0229 17:59:24.045857    6380 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 17:59:24.046774    6380 out.go:177] * Updating the running hyperv "functional-070600" VM ...
	I0229 17:59:24.049157    6380 machine.go:88] provisioning docker machine ...
	I0229 17:59:24.049216    6380 buildroot.go:166] provisioning hostname "functional-070600"
	I0229 17:59:24.049275    6380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-070600 ).state
	I0229 17:59:26.077909    6380 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:59:26.079021    6380 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:59:26.079021    6380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-070600 ).networkadapters[0]).ipaddresses[0]
	I0229 17:59:28.496309    6380 main.go:141] libmachine: [stdout =====>] : 172.26.52.106
	
	I0229 17:59:28.496775    6380 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:59:28.500900    6380 main.go:141] libmachine: Using SSH client type: native
	I0229 17:59:28.501238    6380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.52.106 22 <nil> <nil>}
	I0229 17:59:28.501238    6380 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-070600 && echo "functional-070600" | sudo tee /etc/hostname
	I0229 17:59:28.672861    6380 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-070600
	
	I0229 17:59:28.672861    6380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-070600 ).state
	I0229 17:59:30.661106    6380 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:59:30.661106    6380 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:59:30.662024    6380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-070600 ).networkadapters[0]).ipaddresses[0]
	I0229 17:59:33.039886    6380 main.go:141] libmachine: [stdout =====>] : 172.26.52.106
	
	I0229 17:59:33.039886    6380 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:59:33.044461    6380 main.go:141] libmachine: Using SSH client type: native
	I0229 17:59:33.045083    6380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.52.106 22 <nil> <nil>}
	I0229 17:59:33.045083    6380 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-070600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-070600/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-070600' | sudo tee -a /etc/hosts; 
				fi
			fi
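The /etc/hosts guard script that minikube runs over SSH above is an idempotent update pattern: rewrite an existing `127.0.1.1` entry if one exists, append one otherwise, and do nothing if the hostname is already present. A minimal sketch of that logic, run against a scratch copy instead of the real `/etc/hosts` (the file path and hostname here are illustrative, not from the VM):

```shell
# Sketch of the /etc/hosts guard above, against a scratch file
# rather than the real /etc/hosts (path and hostname illustrative).
hosts_file=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$hosts_file"
name=functional-070600

if ! grep -q "[[:space:]]$name" "$hosts_file"; then
    if grep -q '^127.0.1.1[[:space:]]' "$hosts_file"; then
        # A 127.0.1.1 entry exists: rewrite it in place.
        sed -i "s/^127.0.1.1[[:space:]].*/127.0.1.1 $name/" "$hosts_file"
    else
        # No entry yet: append one.
        echo "127.0.1.1 $name" >> "$hosts_file"
    fi
fi
grep '^127.0.1.1' "$hosts_file"
```

Running the guard a second time leaves the file untouched, which is why it is safe for minikube to execute it on every provision.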
	I0229 17:59:33.194592    6380 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 17:59:33.194592    6380 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0229 17:59:33.194592    6380 buildroot.go:174] setting up certificates
	I0229 17:59:33.194592    6380 provision.go:83] configureAuth start
	I0229 17:59:33.195113    6380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-070600 ).state
	I0229 17:59:35.179537    6380 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:59:35.179736    6380 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:59:35.179789    6380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-070600 ).networkadapters[0]).ipaddresses[0]
	I0229 17:59:37.554832    6380 main.go:141] libmachine: [stdout =====>] : 172.26.52.106
	
	I0229 17:59:37.554883    6380 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:59:37.554883    6380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-070600 ).state
	I0229 17:59:39.576796    6380 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:59:39.576796    6380 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:59:39.577449    6380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-070600 ).networkadapters[0]).ipaddresses[0]
	I0229 17:59:41.998035    6380 main.go:141] libmachine: [stdout =====>] : 172.26.52.106
	
	I0229 17:59:41.998035    6380 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:59:41.998035    6380 provision.go:138] copyHostCerts
	I0229 17:59:41.998262    6380 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0229 17:59:41.998454    6380 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0229 17:59:41.998454    6380 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0229 17:59:41.998882    6380 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0229 17:59:41.999683    6380 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0229 17:59:41.999784    6380 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0229 17:59:41.999893    6380 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0229 17:59:42.000107    6380 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0229 17:59:42.000884    6380 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0229 17:59:42.001104    6380 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0229 17:59:42.001104    6380 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0229 17:59:42.001317    6380 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0229 17:59:42.002116    6380 provision.go:112] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-070600 san=[172.26.52.106 172.26.52.106 localhost 127.0.0.1 minikube functional-070600]
	I0229 17:59:42.136645    6380 provision.go:172] copyRemoteCerts
	I0229 17:59:42.145687    6380 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 17:59:42.145687    6380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-070600 ).state
	I0229 17:59:44.154390    6380 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:59:44.154484    6380 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:59:44.154484    6380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-070600 ).networkadapters[0]).ipaddresses[0]
	I0229 17:59:46.561326    6380 main.go:141] libmachine: [stdout =====>] : 172.26.52.106
	
	I0229 17:59:46.561326    6380 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:59:46.561415    6380 sshutil.go:53] new ssh client: &{IP:172.26.52.106 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-070600\id_rsa Username:docker}
	I0229 17:59:46.675559    6380 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5296207s)
	I0229 17:59:46.675559    6380 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0229 17:59:46.675559    6380 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 17:59:46.726690    6380 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0229 17:59:46.727351    6380 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0229 17:59:46.780254    6380 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0229 17:59:46.780974    6380 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 17:59:46.830386    6380 provision.go:86] duration metric: configureAuth took 13.634938s
	I0229 17:59:46.830386    6380 buildroot.go:189] setting minikube options for container-runtime
	I0229 17:59:46.830941    6380 config.go:182] Loaded profile config "functional-070600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 17:59:46.831101    6380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-070600 ).state
	I0229 17:59:48.822782    6380 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:59:48.822782    6380 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:59:48.823889    6380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-070600 ).networkadapters[0]).ipaddresses[0]
	I0229 17:59:51.224822    6380 main.go:141] libmachine: [stdout =====>] : 172.26.52.106
	
	I0229 17:59:51.224986    6380 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:59:51.229133    6380 main.go:141] libmachine: Using SSH client type: native
	I0229 17:59:51.229479    6380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.52.106 22 <nil> <nil>}
	I0229 17:59:51.229479    6380 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 17:59:51.362162    6380 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 17:59:51.362162    6380 buildroot.go:70] root file system type: tmpfs
	I0229 17:59:51.362812    6380 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 17:59:51.362916    6380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-070600 ).state
	I0229 17:59:53.384654    6380 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:59:53.384654    6380 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:59:53.384728    6380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-070600 ).networkadapters[0]).ipaddresses[0]
	I0229 17:59:55.782656    6380 main.go:141] libmachine: [stdout =====>] : 172.26.52.106
	
	I0229 17:59:55.782656    6380 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:59:55.787032    6380 main.go:141] libmachine: Using SSH client type: native
	I0229 17:59:55.787414    6380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.52.106 22 <nil> <nil>}
	I0229 17:59:55.787544    6380 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 17:59:55.962453    6380 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 17:59:55.963041    6380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-070600 ).state
	I0229 17:59:57.941812    6380 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 17:59:57.941812    6380 main.go:141] libmachine: [stderr =====>] : 
	I0229 17:59:57.941812    6380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-070600 ).networkadapters[0]).ipaddresses[0]
	I0229 18:00:00.395440    6380 main.go:141] libmachine: [stdout =====>] : 172.26.52.106
	
	I0229 18:00:00.396377    6380 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:00:00.400016    6380 main.go:141] libmachine: Using SSH client type: native
	I0229 18:00:00.400375    6380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.52.106 22 <nil> <nil>}
	I0229 18:00:00.400460    6380 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
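The `diff ... || { mv ...; restart; }` one-liner above is a common update-only-if-changed idiom: `diff` exits 0 when the staged unit matches the installed one, so the replace-and-restart branch runs only when the file actually changed. A minimal sketch of the same idiom using scratch files in place of the real systemd unit paths (paths and contents here are illustrative):

```shell
# Sketch of the update-only-if-changed idiom above, on scratch files
# instead of /lib/systemd/system/docker.service (restart step elided).
cur=$(mktemp)
new=$(mktemp)
printf 'ExecStart=/usr/bin/dockerd\n' > "$cur"
printf 'ExecStart=/usr/bin/dockerd --tlsverify\n' > "$new"

# diff exits 0 when the files match; only on a difference does the
# right-hand side run, installing the staged file over the current one.
diff -u "$cur" "$new" > /dev/null || mv "$new" "$cur"
grep -c tlsverify "$cur"
```

In the log above the guard fires as a no-op (the unit was unchanged), so Docker is not restarted and the provisioning step completes in well under a second.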
	I0229 18:00:00.543118    6380 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:00:00.543118    6380 machine.go:91] provisioned docker machine in 36.4918769s
	I0229 18:00:00.543118    6380 start.go:300] post-start starting for "functional-070600" (driver="hyperv")
	I0229 18:00:00.543118    6380 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:00:00.552576    6380 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:00:00.552576    6380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-070600 ).state
	I0229 18:00:02.548432    6380 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:00:02.548432    6380 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:00:02.548614    6380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-070600 ).networkadapters[0]).ipaddresses[0]
	I0229 18:00:04.964156    6380 main.go:141] libmachine: [stdout =====>] : 172.26.52.106
	
	I0229 18:00:04.964156    6380 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:00:04.964686    6380 sshutil.go:53] new ssh client: &{IP:172.26.52.106 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-070600\id_rsa Username:docker}
	I0229 18:00:05.074871    6380 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5219852s)
	I0229 18:00:05.084310    6380 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:00:05.091089    6380 command_runner.go:130] > NAME=Buildroot
	I0229 18:00:05.091089    6380 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0229 18:00:05.091089    6380 command_runner.go:130] > ID=buildroot
	I0229 18:00:05.091089    6380 command_runner.go:130] > VERSION_ID=2023.02.9
	I0229 18:00:05.091089    6380 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0229 18:00:05.091620    6380 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:00:05.091620    6380 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0229 18:00:05.091967    6380 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0229 18:00:05.092276    6380 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem -> 43562.pem in /etc/ssl/certs
	I0229 18:00:05.092276    6380 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem -> /etc/ssl/certs/43562.pem
	I0229 18:00:05.092947    6380 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\test\nested\copy\4356\hosts -> hosts in /etc/test/nested/copy/4356
	I0229 18:00:05.092947    6380 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\test\nested\copy\4356\hosts -> /etc/test/nested/copy/4356/hosts
	I0229 18:00:05.103944    6380 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4356
	I0229 18:00:05.124234    6380 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem --> /etc/ssl/certs/43562.pem (1708 bytes)
	I0229 18:00:05.170325    6380 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\test\nested\copy\4356\hosts --> /etc/test/nested/copy/4356/hosts (40 bytes)
	I0229 18:00:05.223030    6380 start.go:303] post-start completed in 4.6796527s
	I0229 18:00:05.223120    6380 fix.go:56] fixHost completed within 43.8209747s
	I0229 18:00:05.223216    6380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-070600 ).state
	I0229 18:00:07.237613    6380 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:00:07.237613    6380 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:00:07.237613    6380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-070600 ).networkadapters[0]).ipaddresses[0]
	I0229 18:00:09.666883    6380 main.go:141] libmachine: [stdout =====>] : 172.26.52.106
	
	I0229 18:00:09.667670    6380 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:00:09.671891    6380 main.go:141] libmachine: Using SSH client type: native
	I0229 18:00:09.672039    6380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.52.106 22 <nil> <nil>}
	I0229 18:00:09.672039    6380 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 18:00:09.804698    6380 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709229609.974788867
	
	I0229 18:00:09.804698    6380 fix.go:206] guest clock: 1709229609.974788867
	I0229 18:00:09.804698    6380 fix.go:219] Guest: 2024-02-29 18:00:09.974788867 +0000 UTC Remote: 2024-02-29 18:00:05.2231206 +0000 UTC m=+49.066292201 (delta=4.751668267s)
	I0229 18:00:09.804698    6380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-070600 ).state
	I0229 18:00:11.789684    6380 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:00:11.789684    6380 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:00:11.789684    6380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-070600 ).networkadapters[0]).ipaddresses[0]
	I0229 18:00:14.257458    6380 main.go:141] libmachine: [stdout =====>] : 172.26.52.106
	
	I0229 18:00:14.257458    6380 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:00:14.261436    6380 main.go:141] libmachine: Using SSH client type: native
	I0229 18:00:14.261721    6380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.52.106 22 <nil> <nil>}
	I0229 18:00:14.261721    6380 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709229609
	I0229 18:00:14.419883    6380 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Feb 29 18:00:09 UTC 2024
	
	I0229 18:00:14.419980    6380 fix.go:226] clock set: Thu Feb 29 18:00:09 UTC 2024
	 (err=<nil>)
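A note on the clock-sync exchange above: the odd `date +%!s(MISSING).%!N(MISSING)` line is the SSH command `date +%s.%N` after passing through Go's fmt, which flags the literal `%s`/`%N` verbs as missing arguments when the command string is logged. The skew that `fix.go:219` reports can be reproduced locally from the two timestamps in the log; only the subtraction below is new, both epoch values are copied from the log:

```shell
# Hedged sketch: redo the guest-clock skew arithmetic logged at fix.go:219.
guest=1709229609.974788867   # guest clock, read over SSH with: date +%s.%N
remote=1709229605.223120600  # host-side "Remote" timestamp for the same instant
delta=$(awk -v g="$guest" -v r="$remote" 'BEGIN { printf "%.2f", g - r }')
echo "delta=${delta}s"       # the log reports delta=4.751668267s
# minikube then levels the guest clock with: sudo date -s @<epoch>, as logged above
```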
	I0229 18:00:14.419980    6380 start.go:83] releasing machines lock for "functional-070600", held for 53.0174502s
	I0229 18:00:14.420250    6380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-070600 ).state
	I0229 18:00:16.449504    6380 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:00:16.449504    6380 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:00:16.449584    6380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-070600 ).networkadapters[0]).ipaddresses[0]
	I0229 18:00:18.895485    6380 main.go:141] libmachine: [stdout =====>] : 172.26.52.106
	
	I0229 18:00:18.895485    6380 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:00:18.899804    6380 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:00:18.899968    6380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-070600 ).state
	I0229 18:00:18.907819    6380 ssh_runner.go:195] Run: cat /version.json
	I0229 18:00:18.907819    6380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-070600 ).state
	I0229 18:00:20.951366    6380 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:00:20.951366    6380 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:00:20.951531    6380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-070600 ).networkadapters[0]).ipaddresses[0]
	I0229 18:00:20.956822    6380 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:00:20.956908    6380 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:00:20.956908    6380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-070600 ).networkadapters[0]).ipaddresses[0]
	I0229 18:00:23.424533    6380 main.go:141] libmachine: [stdout =====>] : 172.26.52.106
	
	I0229 18:00:23.424533    6380 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:00:23.425268    6380 sshutil.go:53] new ssh client: &{IP:172.26.52.106 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-070600\id_rsa Username:docker}
	I0229 18:00:23.450206    6380 main.go:141] libmachine: [stdout =====>] : 172.26.52.106
	
	I0229 18:00:23.450602    6380 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:00:23.450945    6380 sshutil.go:53] new ssh client: &{IP:172.26.52.106 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-070600\id_rsa Username:docker}
	I0229 18:00:23.633762    6380 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0229 18:00:23.633879    6380 command_runner.go:130] > {"iso_version": "v1.32.1-1708638130-18020", "kicbase_version": "v0.0.42-1708008208-17936", "minikube_version": "v1.32.0", "commit": "d80143d2abd5a004b09b48bbc118a104326900af"}
	I0229 18:00:23.633879    6380 ssh_runner.go:235] Completed: cat /version.json: (4.7257983s)
	I0229 18:00:23.633879    6380 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.7338124s)
	I0229 18:00:23.642650    6380 ssh_runner.go:195] Run: systemctl --version
	I0229 18:00:23.651873    6380 command_runner.go:130] > systemd 252 (252)
	I0229 18:00:23.651873    6380 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0229 18:00:23.660954    6380 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0229 18:00:23.669010    6380 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0229 18:00:23.669788    6380 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:00:23.678770    6380 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 18:00:23.698142    6380 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0229 18:00:23.698142    6380 start.go:475] detecting cgroup driver to use...
	I0229 18:00:23.698432    6380 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:00:23.735229    6380 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
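The `printf %!s(MISSING)` in the command above is the same log-side mangling of a literal `%s`; what actually runs on the VM is a plain `printf %s ... | sudo tee /etc/crictl.yaml`. A local sketch against a temp file (no sudo, and the temp path stands in for `/etc/crictl.yaml`):

```shell
# Hedged sketch: reproduce the crictl.yaml write without touching /etc.
crictl_yaml=$(mktemp)
printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | tee "$crictl_yaml"   # tee echoes the line, matching the command_runner output above
```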
	I0229 18:00:23.739063    6380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 18:00:23.774239    6380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 18:00:23.797262    6380 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 18:00:23.808850    6380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 18:00:23.839449    6380 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:00:23.870515    6380 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 18:00:23.904875    6380 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:00:23.936610    6380 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:00:23.973084    6380 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
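The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place to select the cgroupfs driver and the standard CNI conf dir. The same substitutions can be exercised against a scratch copy; the minimal TOML below is a stand-in (the section headers are assumed, not minikube's full config):

```shell
# Hedged sketch: apply the logged SystemdCgroup/conf_dir sed edits to a scratch file.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
[plugins."io.containerd.grpc.v1.cri".cni]
  conf_dir = "/etc/cni/net.mk"
EOF
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"   # cgroupfs, not systemd
sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' "$cfg"  # standard CNI dir
result=$(grep -E 'SystemdCgroup|conf_dir' "$cfg")
echo "$result"
rm -f "$cfg"
```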
	I0229 18:00:24.014837    6380 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:00:24.034813    6380 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0229 18:00:24.045816    6380 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:00:24.077223    6380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:00:24.349606    6380 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 18:00:24.386450    6380 start.go:475] detecting cgroup driver to use...
	I0229 18:00:24.400616    6380 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 18:00:24.426617    6380 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0229 18:00:24.426617    6380 command_runner.go:130] > [Unit]
	I0229 18:00:24.426617    6380 command_runner.go:130] > Description=Docker Application Container Engine
	I0229 18:00:24.426617    6380 command_runner.go:130] > Documentation=https://docs.docker.com
	I0229 18:00:24.426617    6380 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0229 18:00:24.426617    6380 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0229 18:00:24.426617    6380 command_runner.go:130] > StartLimitBurst=3
	I0229 18:00:24.426617    6380 command_runner.go:130] > StartLimitIntervalSec=60
	I0229 18:00:24.426957    6380 command_runner.go:130] > [Service]
	I0229 18:00:24.426957    6380 command_runner.go:130] > Type=notify
	I0229 18:00:24.426957    6380 command_runner.go:130] > Restart=on-failure
	I0229 18:00:24.426957    6380 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0229 18:00:24.427040    6380 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0229 18:00:24.427040    6380 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0229 18:00:24.427040    6380 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0229 18:00:24.427040    6380 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0229 18:00:24.427111    6380 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0229 18:00:24.427111    6380 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0229 18:00:24.427111    6380 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0229 18:00:24.427111    6380 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0229 18:00:24.427111    6380 command_runner.go:130] > ExecStart=
	I0229 18:00:24.427216    6380 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0229 18:00:24.427242    6380 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0229 18:00:24.427242    6380 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0229 18:00:24.427242    6380 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0229 18:00:24.427242    6380 command_runner.go:130] > LimitNOFILE=infinity
	I0229 18:00:24.427242    6380 command_runner.go:130] > LimitNPROC=infinity
	I0229 18:00:24.427339    6380 command_runner.go:130] > LimitCORE=infinity
	I0229 18:00:24.427339    6380 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0229 18:00:24.427389    6380 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0229 18:00:24.427389    6380 command_runner.go:130] > TasksMax=infinity
	I0229 18:00:24.427389    6380 command_runner.go:130] > TimeoutStartSec=0
	I0229 18:00:24.427389    6380 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0229 18:00:24.427389    6380 command_runner.go:130] > Delegate=yes
	I0229 18:00:24.427461    6380 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0229 18:00:24.427461    6380 command_runner.go:130] > KillMode=process
	I0229 18:00:24.427461    6380 command_runner.go:130] > [Install]
	I0229 18:00:24.427510    6380 command_runner.go:130] > WantedBy=multi-user.target
	I0229 18:00:24.437081    6380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:00:24.473856    6380 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 18:00:24.515610    6380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:00:24.549618    6380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 18:00:24.573860    6380 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:00:24.608480    6380 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0229 18:00:24.619011    6380 ssh_runner.go:195] Run: which cri-dockerd
	I0229 18:00:24.625202    6380 command_runner.go:130] > /usr/bin/cri-dockerd
	I0229 18:00:24.637795    6380 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 18:00:24.659206    6380 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 18:00:24.701942    6380 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 18:00:24.966087    6380 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 18:00:25.217452    6380 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 18:00:25.217652    6380 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 18:00:25.268492    6380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:00:25.537406    6380 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 18:00:37.354774    6380 ssh_runner.go:235] Completed: sudo systemctl restart docker: (11.816615s)
	I0229 18:00:37.367490    6380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0229 18:00:37.403226    6380 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0229 18:00:37.445017    6380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 18:00:37.479424    6380 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0229 18:00:37.686801    6380 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0229 18:00:37.898137    6380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:00:38.105883    6380 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0229 18:00:38.146991    6380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 18:00:38.181347    6380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:00:38.389508    6380 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0229 18:00:38.514847    6380 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0229 18:00:38.525820    6380 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0229 18:00:38.534637    6380 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0229 18:00:38.534637    6380 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0229 18:00:38.534738    6380 command_runner.go:130] > Device: 0,22	Inode: 1399        Links: 1
	I0229 18:00:38.534738    6380 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0229 18:00:38.534738    6380 command_runner.go:130] > Access: 2024-02-29 18:00:38.591223211 +0000
	I0229 18:00:38.534738    6380 command_runner.go:130] > Modify: 2024-02-29 18:00:38.591223211 +0000
	I0229 18:00:38.534738    6380 command_runner.go:130] > Change: 2024-02-29 18:00:38.595223506 +0000
	I0229 18:00:38.534738    6380 command_runner.go:130] >  Birth: -
	I0229 18:00:38.534816    6380 start.go:543] Will wait 60s for crictl version
	I0229 18:00:38.544063    6380 ssh_runner.go:195] Run: which crictl
	I0229 18:00:38.549661    6380 command_runner.go:130] > /usr/bin/crictl
	I0229 18:00:38.558842    6380 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 18:00:38.629911    6380 command_runner.go:130] > Version:  0.1.0
	I0229 18:00:38.629911    6380 command_runner.go:130] > RuntimeName:  docker
	I0229 18:00:38.629911    6380 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0229 18:00:38.629911    6380 command_runner.go:130] > RuntimeApiVersion:  v1
	I0229 18:00:38.629911    6380 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0229 18:00:38.639850    6380 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 18:00:38.671745    6380 command_runner.go:130] > 24.0.7
	I0229 18:00:38.681458    6380 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 18:00:38.713427    6380 command_runner.go:130] > 24.0.7
	I0229 18:00:38.715023    6380 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0229 18:00:38.715255    6380 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0229 18:00:38.720040    6380 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0229 18:00:38.720040    6380 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0229 18:00:38.720040    6380 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0229 18:00:38.720040    6380 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:76:3f:19 Flags:up|broadcast|multicast|running}
	I0229 18:00:38.723679    6380 ip.go:210] interface addr: fe80::841a:4367:8c9:abc/64
	I0229 18:00:38.723721    6380 ip.go:210] interface addr: 172.26.48.1/20
	I0229 18:00:38.737398    6380 ssh_runner.go:195] Run: grep 172.26.48.1	host.minikube.internal$ /etc/hosts
	I0229 18:00:38.743094    6380 command_runner.go:130] > 172.26.48.1	host.minikube.internal
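The probe above greps the VM's `/etc/hosts` for an exact `IP<TAB>hostname` line anchored at end-of-line; a hit means `host.minikube.internal` is already mapped. A local sketch with a scratch file standing in for the VM's hosts file:

```shell
# Hedged sketch: the tab-separated, end-anchored hosts lookup from the log.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.26.48.1\thost.minikube.internal\n' > "$hosts"
pattern=$(printf '172.26.48.1\thost.minikube.internal')   # literal tab via printf
if grep -q "${pattern}\$" "$hosts"; then found=yes; else found=no; fi
echo "$found"
rm -f "$hosts"
```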
	I0229 18:00:38.743623    6380 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 18:00:38.750263    6380 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 18:00:38.775885    6380 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0229 18:00:38.775885    6380 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0229 18:00:38.775885    6380 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0229 18:00:38.775885    6380 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0229 18:00:38.775885    6380 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0229 18:00:38.775885    6380 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0229 18:00:38.775885    6380 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0229 18:00:38.775885    6380 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:00:38.775885    6380 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0229 18:00:38.775885    6380 docker.go:615] Images already preloaded, skipping extraction
	I0229 18:00:38.784882    6380 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 18:00:38.814689    6380 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0229 18:00:38.815522    6380 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0229 18:00:38.815522    6380 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0229 18:00:38.815522    6380 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0229 18:00:38.815522    6380 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0229 18:00:38.815522    6380 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0229 18:00:38.815522    6380 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0229 18:00:38.815522    6380 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:00:38.815798    6380 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0229 18:00:38.815798    6380 cache_images.go:84] Images are preloaded, skipping loading
	I0229 18:00:38.822712    6380 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 18:00:38.856913    6380 command_runner.go:130] > cgroupfs
	I0229 18:00:38.858111    6380 cni.go:84] Creating CNI manager for ""
	I0229 18:00:38.858377    6380 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 18:00:38.858414    6380 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:00:38.858508    6380 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.26.52.106 APIServerPort:8441 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-070600 NodeName:functional-070600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.26.52.106"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.26.52.106 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 18:00:38.858840    6380 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.26.52.106
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-070600"
	  kubeletExtraArgs:
	    node-ip: 172.26.52.106
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.26.52.106"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:00:38.858840    6380 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=functional-070600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.26.52.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:functional-070600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I0229 18:00:38.868460    6380 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 18:00:38.888888    6380 command_runner.go:130] > kubeadm
	I0229 18:00:38.888888    6380 command_runner.go:130] > kubectl
	I0229 18:00:38.888888    6380 command_runner.go:130] > kubelet
	I0229 18:00:38.888888    6380 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:00:38.899300    6380 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:00:38.925800    6380 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0229 18:00:38.956424    6380 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 18:00:38.989066    6380 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0229 18:00:39.029156    6380 ssh_runner.go:195] Run: grep 172.26.52.106	control-plane.minikube.internal$ /etc/hosts
	I0229 18:00:39.038468    6380 command_runner.go:130] > 172.26.52.106	control-plane.minikube.internal
	I0229 18:00:39.038468    6380 certs.go:56] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600 for IP: 172.26.52.106
	I0229 18:00:39.038668    6380 certs.go:190] acquiring lock for shared ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:00:39.039437    6380 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0229 18:00:39.039788    6380 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0229 18:00:39.040058    6380 certs.go:315] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.key
	I0229 18:00:39.040586    6380 certs.go:315] skipping minikube signed cert generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\apiserver.key.0fc5cfc1
	I0229 18:00:39.040828    6380 certs.go:315] skipping aggregator signed cert generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\proxy-client.key
	I0229 18:00:39.040906    6380 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0229 18:00:39.040976    6380 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0229 18:00:39.041054    6380 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0229 18:00:39.041210    6380 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0229 18:00:39.041297    6380 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0229 18:00:39.041410    6380 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0229 18:00:39.041503    6380 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0229 18:00:39.041583    6380 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0229 18:00:39.041927    6380 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4356.pem (1338 bytes)
	W0229 18:00:39.042167    6380 certs.go:433] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4356_empty.pem, impossibly tiny 0 bytes
	I0229 18:00:39.042167    6380 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0229 18:00:39.042385    6380 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0229 18:00:39.042625    6380 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0229 18:00:39.042815    6380 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0229 18:00:39.042956    6380 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem (1708 bytes)
	I0229 18:00:39.042956    6380 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem -> /usr/share/ca-certificates/43562.pem
	I0229 18:00:39.042956    6380 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:00:39.042956    6380 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4356.pem -> /usr/share/ca-certificates/4356.pem
	I0229 18:00:39.044030    6380 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:00:39.093948    6380 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 18:00:39.144899    6380 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:00:39.192125    6380 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 18:00:39.234610    6380 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:00:39.282687    6380 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 18:00:39.331348    6380 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:00:39.377244    6380 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 18:00:39.429195    6380 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem --> /usr/share/ca-certificates/43562.pem (1708 bytes)
	I0229 18:00:39.468146    6380 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:00:39.534113    6380 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4356.pem --> /usr/share/ca-certificates/4356.pem (1338 bytes)
	I0229 18:00:39.580880    6380 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:00:39.624402    6380 ssh_runner.go:195] Run: openssl version
	I0229 18:00:39.637401    6380 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0229 18:00:39.647161    6380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43562.pem && ln -fs /usr/share/ca-certificates/43562.pem /etc/ssl/certs/43562.pem"
	I0229 18:00:39.681631    6380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43562.pem
	I0229 18:00:39.688685    6380 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 29 17:55 /usr/share/ca-certificates/43562.pem
	I0229 18:00:39.688685    6380 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:55 /usr/share/ca-certificates/43562.pem
	I0229 18:00:39.701839    6380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43562.pem
	I0229 18:00:39.711892    6380 command_runner.go:130] > 3ec20f2e
	I0229 18:00:39.722341    6380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43562.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 18:00:39.748967    6380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:00:39.780473    6380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:00:39.792762    6380 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 29 17:41 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:00:39.792762    6380 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:41 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:00:39.805648    6380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:00:39.815690    6380 command_runner.go:130] > b5213941
	I0229 18:00:39.825533    6380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:00:39.853325    6380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4356.pem && ln -fs /usr/share/ca-certificates/4356.pem /etc/ssl/certs/4356.pem"
	I0229 18:00:39.886097    6380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4356.pem
	I0229 18:00:39.893862    6380 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 29 17:55 /usr/share/ca-certificates/4356.pem
	I0229 18:00:39.893937    6380 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:55 /usr/share/ca-certificates/4356.pem
	I0229 18:00:39.902862    6380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4356.pem
	I0229 18:00:39.912000    6380 command_runner.go:130] > 51391683
	I0229 18:00:39.921100    6380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4356.pem /etc/ssl/certs/51391683.0"
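The three install steps above follow OpenSSL's hashed-symlink convention for trust directories: `openssl x509 -hash -noout` prints the certificate's subject-name hash, and a symlink named `<hash>.0` inside the cert directory is what lets OpenSSL look the CA up by hash. A minimal self-contained sketch of the same pattern (the throwaway cert, paths, and CN are illustrative, not minikube's actual files):

```shell
#!/bin/sh
# Generate a throwaway self-signed CA, then install it the way the log does:
# hash the subject name and symlink <hash>.0 into a (temporary) trust dir.
set -eu
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=example-ca" \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" -days 1 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/$hash.0"
# With the hash symlink in place, OpenSSL can resolve the CA via -CApath.
openssl verify -CApath "$dir" "$dir/ca.pem"
```

The `test -L … || ln -fs …` guard seen in the log is just an idempotence check, so re-running provisioning does not churn existing links.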
	I0229 18:00:39.948199    6380 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:00:39.955619    6380 command_runner.go:130] > ca.crt
	I0229 18:00:39.955688    6380 command_runner.go:130] > ca.key
	I0229 18:00:39.955688    6380 command_runner.go:130] > healthcheck-client.crt
	I0229 18:00:39.955688    6380 command_runner.go:130] > healthcheck-client.key
	I0229 18:00:39.955688    6380 command_runner.go:130] > peer.crt
	I0229 18:00:39.955688    6380 command_runner.go:130] > peer.key
	I0229 18:00:39.955688    6380 command_runner.go:130] > server.crt
	I0229 18:00:39.955688    6380 command_runner.go:130] > server.key
	I0229 18:00:39.964521    6380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 18:00:39.973259    6380 command_runner.go:130] > Certificate will not expire
	I0229 18:00:39.981985    6380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 18:00:39.990318    6380 command_runner.go:130] > Certificate will not expire
	I0229 18:00:40.000817    6380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 18:00:40.009606    6380 command_runner.go:130] > Certificate will not expire
	I0229 18:00:40.019425    6380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 18:00:40.028559    6380 command_runner.go:130] > Certificate will not expire
	I0229 18:00:40.037210    6380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 18:00:40.046349    6380 command_runner.go:130] > Certificate will not expire
	I0229 18:00:40.055110    6380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0229 18:00:40.063515    6380 command_runner.go:130] > Certificate will not expire
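Each of the expiry probes above uses `openssl x509 -checkend 86400`, which exits 0 and prints `Certificate will not expire` if the certificate is still valid 86400 seconds (24 h) from now, and exits 1 otherwise. A small reproduction against a freshly generated certificate (file names and validity periods are illustrative):

```shell
#!/bin/sh
set -eu
dir=$(mktemp -d)
# Cert valid for 2 days: a 1-day (86400 s) lookahead passes...
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=t" \
  -keyout "$dir/k" -out "$dir/c.pem" -days 2 2>/dev/null
openssl x509 -noout -in "$dir/c.pem" -checkend 86400
# ...but a 3-day (259200 s) lookahead fails with exit status 1.
if openssl x509 -noout -in "$dir/c.pem" -checkend 259200; then
  echo "checkend unexpectedly passed"; exit 1
fi
```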
	I0229 18:00:40.063887    6380 kubeadm.go:404] StartCluster: {Name:functional-070600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVers
ion:v1.28.4 ClusterName:functional-070600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:172.26.52.106 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:00:40.070425    6380 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 18:00:40.106528    6380 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:00:40.124170    6380 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0229 18:00:40.124221    6380 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0229 18:00:40.124221    6380 command_runner.go:130] > /var/lib/minikube/etcd:
	I0229 18:00:40.124221    6380 command_runner.go:130] > member
	I0229 18:00:40.124221    6380 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 18:00:40.124221    6380 kubeadm.go:636] restartCluster start
	I0229 18:00:40.134691    6380 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 18:00:40.159079    6380 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:00:40.159673    6380 kubeconfig.go:92] found "functional-070600" server: "https://172.26.52.106:8441"
	I0229 18:00:40.161112    6380 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 18:00:40.161783    6380 kapi.go:59] client config for functional-070600: &rest.Config{Host:"https://172.26.52.106:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-070600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-070600\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ff0600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 18:00:40.162655    6380 cert_rotation.go:137] Starting client certificate rotation controller
	I0229 18:00:40.171802    6380 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 18:00:40.192736    6380 api_server.go:166] Checking apiserver status ...
	I0229 18:00:40.202269    6380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:00:40.224770    6380 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:00:40.704267    6380 api_server.go:166] Checking apiserver status ...
	I0229 18:00:40.715152    6380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:00:40.738479    6380 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:00:41.203955    6380 api_server.go:166] Checking apiserver status ...
	I0229 18:00:41.212827    6380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:00:41.239520    6380 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:00:41.703714    6380 api_server.go:166] Checking apiserver status ...
	I0229 18:00:41.715118    6380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:00:41.740653    6380 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:00:42.205748    6380 api_server.go:166] Checking apiserver status ...
	I0229 18:00:42.215420    6380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:00:42.239477    6380 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:00:42.702153    6380 api_server.go:166] Checking apiserver status ...
	I0229 18:00:42.710728    6380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:00:42.735573    6380 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:00:43.204783    6380 api_server.go:166] Checking apiserver status ...
	I0229 18:00:43.217485    6380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:00:43.240382    6380 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:00:43.707194    6380 api_server.go:166] Checking apiserver status ...
	I0229 18:00:43.719005    6380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0229 18:00:43.751432    6380 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:00:44.201539    6380 api_server.go:166] Checking apiserver status ...
	I0229 18:00:44.213040    6380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:00:44.259069    6380 command_runner.go:130] > 6471
	I0229 18:00:44.267063    6380 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6471/cgroup
	W0229 18:00:44.287694    6380 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/6471/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0229 18:00:44.298742    6380 ssh_runner.go:195] Run: ls
	I0229 18:00:44.305699    6380 api_server.go:253] Checking apiserver healthz at https://172.26.52.106:8441/healthz ...
	I0229 18:00:48.201685    6380 api_server.go:279] https://172.26.52.106:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 18:00:48.202162    6380 retry.go:31] will retry after 308.757403ms: https://172.26.52.106:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 18:00:48.518804    6380 api_server.go:253] Checking apiserver healthz at https://172.26.52.106:8441/healthz ...
	I0229 18:00:48.535166    6380 api_server.go:279] https://172.26.52.106:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:00:48.535241    6380 retry.go:31] will retry after 286.731633ms: https://172.26.52.106:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:00:48.832754    6380 api_server.go:253] Checking apiserver healthz at https://172.26.52.106:8441/healthz ...
	I0229 18:00:48.841313    6380 api_server.go:279] https://172.26.52.106:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:00:48.841450    6380 retry.go:31] will retry after 373.426225ms: https://172.26.52.106:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:00:49.225652    6380 api_server.go:253] Checking apiserver healthz at https://172.26.52.106:8441/healthz ...
	I0229 18:00:49.235987    6380 api_server.go:279] https://172.26.52.106:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:00:49.236348    6380 retry.go:31] will retry after 534.372942ms: https://172.26.52.106:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 18:00:49.776719    6380 api_server.go:253] Checking apiserver healthz at https://172.26.52.106:8441/healthz ...
	I0229 18:00:49.787450    6380 api_server.go:279] https://172.26.52.106:8441/healthz returned 200:
	ok
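The sequence above (403, then two 500s while the `rbac/bootstrap-roles` post-start hook settles, then 200 `ok`) is a plain poll-until-healthy loop with short per-attempt backoff. A sketch of that control flow, using a file-backed probe as a stand-in for the HTTPS `/healthz` call so it runs without a cluster (the probe, attempt cap, and delay are all illustrative):

```shell
#!/bin/sh
# Retry-with-backoff sketch: poll a health probe until it succeeds, as the
# log does against /healthz. The probe fails twice, then reports healthy.
set -eu
state=$(mktemp)
echo 0 > "$state"
probe() {
  n=$(cat "$state"); n=$((n + 1)); echo "$n" > "$state"
  [ "$n" -ge 3 ]   # stand-in for: curl returns HTTP 200 on /healthz
}
attempt=0
until probe; do
  attempt=$((attempt + 1))
  [ "$attempt" -lt 10 ] || { echo "healthz never became ready"; exit 1; }
  sleep 1   # the real loop uses a varying backoff; fixed delay for brevity
done
echo "healthy after $(cat "$state") probes"
```

The initial 403 is expected: an anonymous client may be forbidden from `/healthz` until RBAC bootstrap roles exist, so the loop treats any non-200 status the same way and simply retries.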
	I0229 18:00:49.788300    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/namespaces/kube-system/pods
	I0229 18:00:49.788300    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:49.788300    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:49.788300    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:49.804566    6380 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0229 18:00:49.804566    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:49.804566    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:49 GMT
	I0229 18:00:49.804566    6380 round_trippers.go:580]     Audit-Id: b8c352cf-f82b-45de-97f6-9e026d0c1f5d
	I0229 18:00:49.804566    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:49.804566    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:49.804566    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:49.804566    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:49.804566    6380 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"532"},"items":[{"metadata":{"name":"coredns-5dd5756b68-rlkxp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c0537edd-1cdc-4d52-9c2a-743c59b3d0a1","resourceVersion":"526","creationTimestamp":"2024-02-29T17:58:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0cbc3bda-77fb-4b51-90c0-24cd4a31cc19","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:58:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0cbc3bda-77fb-4b51-90c0-24cd4a31cc19\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 49005 chars]
	I0229 18:00:49.809865    6380 system_pods.go:86] 7 kube-system pods found
	I0229 18:00:49.809865    6380 system_pods.go:89] "coredns-5dd5756b68-rlkxp" [c0537edd-1cdc-4d52-9c2a-743c59b3d0a1] Running
	I0229 18:00:49.809865    6380 system_pods.go:89] "etcd-functional-070600" [15ee6939-45d3-4680-9f03-ce44934af9d6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 18:00:49.809964    6380 system_pods.go:89] "kube-apiserver-functional-070600" [65eef792-eafc-48a8-8865-f7f01371fa6e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 18:00:49.809995    6380 system_pods.go:89] "kube-controller-manager-functional-070600" [36899a27-3284-4e92-9288-866d7a3c97ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 18:00:49.809995    6380 system_pods.go:89] "kube-proxy-wj6dl" [f2beec7d-0917-4c10-bbe6-303accd46692] Running
	I0229 18:00:49.809995    6380 system_pods.go:89] "kube-scheduler-functional-070600" [c6d69b9e-84ea-4827-bc4b-2a9387081024] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 18:00:49.809995    6380 system_pods.go:89] "storage-provisioner" [578ab2cc-0eab-4572-8d30-0cabd99bfa92] Running
	I0229 18:00:49.809995    6380 round_trippers.go:463] GET https://172.26.52.106:8441/version
	I0229 18:00:49.809995    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:49.809995    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:49.809995    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:49.811687    6380 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 18:00:49.811687    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:49.811687    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:49.811937    6380 round_trippers.go:580]     Content-Length: 264
	I0229 18:00:49.811937    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:49 GMT
	I0229 18:00:49.811937    6380 round_trippers.go:580]     Audit-Id: 7901345e-fe81-4262-a1be-29f622a35b3f
	I0229 18:00:49.811937    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:49.811937    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:49.811937    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:49.811937    6380 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0229 18:00:49.812037    6380 api_server.go:141] control plane version: v1.28.4
	I0229 18:00:49.812037    6380 kubeadm.go:630] The running cluster does not require reconfiguration: 172.26.52.106
	I0229 18:00:49.812037    6380 kubeadm.go:684] Taking a shortcut, as the cluster seems to be properly configured
	I0229 18:00:49.812126    6380 kubeadm.go:640] restartCluster took 9.6873685s
	I0229 18:00:49.812167    6380 kubeadm.go:406] StartCluster complete in 9.74774s
	I0229 18:00:49.812167    6380 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:00:49.812167    6380 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 18:00:49.813568    6380 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:00:49.815248    6380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 18:00:49.815340    6380 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 18:00:49.815340    6380 addons.go:69] Setting storage-provisioner=true in profile "functional-070600"
	I0229 18:00:49.815340    6380 addons.go:69] Setting default-storageclass=true in profile "functional-070600"
	I0229 18:00:49.815340    6380 addons.go:234] Setting addon storage-provisioner=true in "functional-070600"
	W0229 18:00:49.815340    6380 addons.go:243] addon storage-provisioner should already be in state true
	I0229 18:00:49.815340    6380 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-070600"
	I0229 18:00:49.815340    6380 host.go:66] Checking if "functional-070600" exists ...
	I0229 18:00:49.815340    6380 config.go:182] Loaded profile config "functional-070600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 18:00:49.816174    6380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-070600 ).state
	I0229 18:00:49.816660    6380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-070600 ).state
	I0229 18:00:49.832542    6380 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 18:00:49.833093    6380 kapi.go:59] client config for functional-070600: &rest.Config{Host:"https://172.26.52.106:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-070600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-070600\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ff0600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 18:00:49.833961    6380 round_trippers.go:463] GET https://172.26.52.106:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0229 18:00:49.833961    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:49.833961    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:49.833961    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:49.841168    6380 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 18:00:49.842037    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:49.842037    6380 round_trippers.go:580]     Audit-Id: 9ffdee4f-0576-4ad8-99f3-d6442ece4ed8
	I0229 18:00:49.842037    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:49.842037    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:49.842037    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:49.842037    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:49.842037    6380 round_trippers.go:580]     Content-Length: 291
	I0229 18:00:49.842037    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:50 GMT
	I0229 18:00:49.842130    6380 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ad97a7ec-da56-413a-ba00-72954792d60e","resourceVersion":"454","creationTimestamp":"2024-02-29T17:58:20Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0229 18:00:49.842389    6380 kapi.go:248] "coredns" deployment in "kube-system" namespace and "functional-070600" context rescaled to 1 replicas
	I0229 18:00:49.842473    6380 start.go:223] Will wait 6m0s for node &{Name: IP:172.26.52.106 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 18:00:49.843341    6380 out.go:177] * Verifying Kubernetes components...
	I0229 18:00:49.856149    6380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:00:49.974982    6380 command_runner.go:130] > apiVersion: v1
	I0229 18:00:49.974982    6380 command_runner.go:130] > data:
	I0229 18:00:49.974982    6380 command_runner.go:130] >   Corefile: |
	I0229 18:00:49.974982    6380 command_runner.go:130] >     .:53 {
	I0229 18:00:49.974982    6380 command_runner.go:130] >         log
	I0229 18:00:49.974982    6380 command_runner.go:130] >         errors
	I0229 18:00:49.974982    6380 command_runner.go:130] >         health {
	I0229 18:00:49.974982    6380 command_runner.go:130] >            lameduck 5s
	I0229 18:00:49.974982    6380 command_runner.go:130] >         }
	I0229 18:00:49.974982    6380 command_runner.go:130] >         ready
	I0229 18:00:49.974982    6380 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0229 18:00:49.974982    6380 command_runner.go:130] >            pods insecure
	I0229 18:00:49.974982    6380 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0229 18:00:49.974982    6380 command_runner.go:130] >            ttl 30
	I0229 18:00:49.974982    6380 command_runner.go:130] >         }
	I0229 18:00:49.974982    6380 command_runner.go:130] >         prometheus :9153
	I0229 18:00:49.974982    6380 command_runner.go:130] >         hosts {
	I0229 18:00:49.974982    6380 command_runner.go:130] >            172.26.48.1 host.minikube.internal
	I0229 18:00:49.974982    6380 command_runner.go:130] >            fallthrough
	I0229 18:00:49.974982    6380 command_runner.go:130] >         }
	I0229 18:00:49.974982    6380 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0229 18:00:49.974982    6380 command_runner.go:130] >            max_concurrent 1000
	I0229 18:00:49.974982    6380 command_runner.go:130] >         }
	I0229 18:00:49.974982    6380 command_runner.go:130] >         cache 30
	I0229 18:00:49.974982    6380 command_runner.go:130] >         loop
	I0229 18:00:49.974982    6380 command_runner.go:130] >         reload
	I0229 18:00:49.974982    6380 command_runner.go:130] >         loadbalance
	I0229 18:00:49.974982    6380 command_runner.go:130] >     }
	I0229 18:00:49.974982    6380 command_runner.go:130] > kind: ConfigMap
	I0229 18:00:49.974982    6380 command_runner.go:130] > metadata:
	I0229 18:00:49.974982    6380 command_runner.go:130] >   creationTimestamp: "2024-02-29T17:58:20Z"
	I0229 18:00:49.974982    6380 command_runner.go:130] >   name: coredns
	I0229 18:00:49.974982    6380 command_runner.go:130] >   namespace: kube-system
	I0229 18:00:49.974982    6380 command_runner.go:130] >   resourceVersion: "384"
	I0229 18:00:49.974982    6380 command_runner.go:130] >   uid: 65976193-87f7-4851-8eb0-4cf7bec8f2a9
	I0229 18:00:49.975521    6380 node_ready.go:35] waiting up to 6m0s for node "functional-070600" to be "Ready" ...
	I0229 18:00:49.975801    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/nodes/functional-070600
	I0229 18:00:49.975801    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:49.975858    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:49.975858    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:49.975917    6380 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0229 18:00:49.980524    6380 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 18:00:49.980524    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:49.980524    6380 round_trippers.go:580]     Audit-Id: 2b97bccc-515d-4c2d-b1f5-8a3d3fafb439
	I0229 18:00:49.980524    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:49.980524    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:49.980524    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:49.980524    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:49.980524    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:50 GMT
	I0229 18:00:49.981196    6380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","resourceVersion":"416","creationTimestamp":"2024-02-29T17:58:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-070600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-070600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_58_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:58:17Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0229 18:00:49.982069    6380 node_ready.go:49] node "functional-070600" has status "Ready":"True"
	I0229 18:00:49.982069    6380 node_ready.go:38] duration metric: took 6.4788ms waiting for node "functional-070600" to be "Ready" ...
	I0229 18:00:49.982069    6380 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 18:00:49.982069    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/namespaces/kube-system/pods
	I0229 18:00:49.982597    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:49.982597    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:49.982597    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:49.987572    6380 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 18:00:49.987572    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:49.987572    6380 round_trippers.go:580]     Audit-Id: 31c81084-f82f-4b02-ba7f-289061a57671
	I0229 18:00:49.988110    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:49.988110    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:49.988110    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:49.988110    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:49.988110    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:50 GMT
	I0229 18:00:49.988380    6380 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"532"},"items":[{"metadata":{"name":"coredns-5dd5756b68-rlkxp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c0537edd-1cdc-4d52-9c2a-743c59b3d0a1","resourceVersion":"526","creationTimestamp":"2024-02-29T17:58:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0cbc3bda-77fb-4b51-90c0-24cd4a31cc19","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:58:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0cbc3bda-77fb-4b51-90c0-24cd4a31cc19\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 49005 chars]
	I0229 18:00:49.991514    6380 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-rlkxp" in "kube-system" namespace to be "Ready" ...
	I0229 18:00:49.991615    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rlkxp
	I0229 18:00:49.991615    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:49.991615    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:49.991615    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:49.995204    6380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:00:49.995204    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:49.995204    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:49.995204    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:49.995204    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:49.995204    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:49.995204    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:50 GMT
	I0229 18:00:49.995204    6380 round_trippers.go:580]     Audit-Id: 1a0afbea-b559-4f56-8827-66bab714a0f2
	I0229 18:00:49.995204    6380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-rlkxp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c0537edd-1cdc-4d52-9c2a-743c59b3d0a1","resourceVersion":"526","creationTimestamp":"2024-02-29T17:58:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0cbc3bda-77fb-4b51-90c0-24cd4a31cc19","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:58:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0cbc3bda-77fb-4b51-90c0-24cd4a31cc19\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6155 chars]
	I0229 18:00:49.996207    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/nodes/functional-070600
	I0229 18:00:49.996207    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:49.996207    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:49.996207    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:49.999206    6380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:00:49.999206    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:49.999902    6380 round_trippers.go:580]     Audit-Id: 252bac1e-69b9-487c-965c-5a136442419d
	I0229 18:00:49.999902    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:49.999902    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:49.999902    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:49.999902    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:49.999902    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:50 GMT
	I0229 18:00:49.999962    6380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","resourceVersion":"416","creationTimestamp":"2024-02-29T17:58:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-070600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-070600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_58_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:58:17Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0229 18:00:50.000597    6380 pod_ready.go:92] pod "coredns-5dd5756b68-rlkxp" in "kube-system" namespace has status "Ready":"True"
	I0229 18:00:50.000640    6380 pod_ready.go:81] duration metric: took 9.0706ms waiting for pod "coredns-5dd5756b68-rlkxp" in "kube-system" namespace to be "Ready" ...
	I0229 18:00:50.000640    6380 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-070600" in "kube-system" namespace to be "Ready" ...
	I0229 18:00:50.000888    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/namespaces/kube-system/pods/etcd-functional-070600
	I0229 18:00:50.000888    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:50.000888    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:50.000888    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:50.003470    6380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:00:50.003470    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:50.003470    6380 round_trippers.go:580]     Audit-Id: 07fc8640-daa0-4454-9d79-8242307a4c3a
	I0229 18:00:50.003470    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:50.003470    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:50.003470    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:50.003470    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:50.003470    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:50 GMT
	I0229 18:00:50.004469    6380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-070600","namespace":"kube-system","uid":"15ee6939-45d3-4680-9f03-ce44934af9d6","resourceVersion":"528","creationTimestamp":"2024-02-29T17:58:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.26.52.106:2379","kubernetes.io/config.hash":"7c6cfa9b9829804068e32ca2ae30321e","kubernetes.io/config.mirror":"7c6cfa9b9829804068e32ca2ae30321e","kubernetes.io/config.seen":"2024-02-29T17:58:21.000515208Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:58:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6300 chars]
	I0229 18:00:50.004469    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/nodes/functional-070600
	I0229 18:00:50.004469    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:50.004469    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:50.004469    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:50.007474    6380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:00:50.007474    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:50.007474    6380 round_trippers.go:580]     Audit-Id: 957c46a0-b79f-4e68-974c-da9d0ec0d2f4
	I0229 18:00:50.007474    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:50.007474    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:50.007474    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:50.007474    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:50.007474    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:50 GMT
	I0229 18:00:50.008469    6380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","resourceVersion":"416","creationTimestamp":"2024-02-29T17:58:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-070600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-070600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_58_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:58:17Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0229 18:00:50.504617    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/namespaces/kube-system/pods/etcd-functional-070600
	I0229 18:00:50.504617    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:50.504617    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:50.504617    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:50.510218    6380 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 18:00:50.510306    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:50.510306    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:50 GMT
	I0229 18:00:50.510369    6380 round_trippers.go:580]     Audit-Id: 0bceb278-ce4f-4b9c-a6b5-8e7360c9d3a5
	I0229 18:00:50.510369    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:50.510433    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:50.510433    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:50.510464    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:50.511375    6380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-070600","namespace":"kube-system","uid":"15ee6939-45d3-4680-9f03-ce44934af9d6","resourceVersion":"528","creationTimestamp":"2024-02-29T17:58:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.26.52.106:2379","kubernetes.io/config.hash":"7c6cfa9b9829804068e32ca2ae30321e","kubernetes.io/config.mirror":"7c6cfa9b9829804068e32ca2ae30321e","kubernetes.io/config.seen":"2024-02-29T17:58:21.000515208Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:58:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6300 chars]
	I0229 18:00:50.512444    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/nodes/functional-070600
	I0229 18:00:50.512549    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:50.512549    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:50.512650    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:50.515135    6380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:00:50.515135    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:50.515135    6380 round_trippers.go:580]     Audit-Id: f4d24139-8c46-42bd-aa32-f826c014311e
	I0229 18:00:50.515135    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:50.515135    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:50.515135    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:50.515135    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:50.515135    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:50 GMT
	I0229 18:00:50.515135    6380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","resourceVersion":"416","creationTimestamp":"2024-02-29T17:58:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-070600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-070600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_58_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:58:17Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0229 18:00:51.015960    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/namespaces/kube-system/pods/etcd-functional-070600
	I0229 18:00:51.016041    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:51.016041    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:51.016041    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:51.020622    6380 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 18:00:51.020622    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:51.020622    6380 round_trippers.go:580]     Audit-Id: 5322acfb-9446-49cc-b5b5-1bdd4979f7e7
	I0229 18:00:51.020622    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:51.020622    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:51.020890    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:51.020890    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:51.020890    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:51 GMT
	I0229 18:00:51.021151    6380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-070600","namespace":"kube-system","uid":"15ee6939-45d3-4680-9f03-ce44934af9d6","resourceVersion":"528","creationTimestamp":"2024-02-29T17:58:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.26.52.106:2379","kubernetes.io/config.hash":"7c6cfa9b9829804068e32ca2ae30321e","kubernetes.io/config.mirror":"7c6cfa9b9829804068e32ca2ae30321e","kubernetes.io/config.seen":"2024-02-29T17:58:21.000515208Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:58:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6300 chars]
	I0229 18:00:51.021770    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/nodes/functional-070600
	I0229 18:00:51.021770    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:51.021770    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:51.021770    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:51.025466    6380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:00:51.026000    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:51.026000    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:51.026000    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:51.026000    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:51.026068    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:51.026111    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:51 GMT
	I0229 18:00:51.026111    6380 round_trippers.go:580]     Audit-Id: 33199e35-4658-4b5a-8ed9-bcca4c490709
	I0229 18:00:51.026351    6380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","resourceVersion":"416","creationTimestamp":"2024-02-29T17:58:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-070600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-070600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_58_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:58:17Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0229 18:00:51.509451    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/namespaces/kube-system/pods/etcd-functional-070600
	I0229 18:00:51.509451    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:51.509515    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:51.509515    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:51.514008    6380 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 18:00:51.514008    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:51.514008    6380 round_trippers.go:580]     Audit-Id: 4ea42abd-69df-47ac-9409-bb1fb23804b6
	I0229 18:00:51.514008    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:51.514088    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:51.514088    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:51.514088    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:51.514088    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:51 GMT
	I0229 18:00:51.514302    6380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-070600","namespace":"kube-system","uid":"15ee6939-45d3-4680-9f03-ce44934af9d6","resourceVersion":"528","creationTimestamp":"2024-02-29T17:58:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.26.52.106:2379","kubernetes.io/config.hash":"7c6cfa9b9829804068e32ca2ae30321e","kubernetes.io/config.mirror":"7c6cfa9b9829804068e32ca2ae30321e","kubernetes.io/config.seen":"2024-02-29T17:58:21.000515208Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:58:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6300 chars]
	I0229 18:00:51.514799    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/nodes/functional-070600
	I0229 18:00:51.514799    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:51.514799    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:51.514799    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:51.520768    6380 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 18:00:51.520768    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:51.520768    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:51.521777    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:51 GMT
	I0229 18:00:51.521777    6380 round_trippers.go:580]     Audit-Id: 2202e324-f597-4672-9b52-a86e688966a5
	I0229 18:00:51.521777    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:51.521777    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:51.521777    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:51.521777    6380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","resourceVersion":"416","creationTimestamp":"2024-02-29T17:58:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-070600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-070600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_58_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:58:17Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0229 18:00:51.876647    6380 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:00:51.876733    6380 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:00:51.877328    6380 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:00:51.877689    6380 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 18:00:51.877689    6380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 18:00:51.877689    6380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-070600 ).state
	I0229 18:00:51.891226    6380 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:00:51.891226    6380 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:00:51.892175    6380 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 18:00:51.893371    6380 kapi.go:59] client config for functional-070600: &rest.Config{Host:"https://172.26.52.106:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-070600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-070600\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ff0600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 18:00:51.893617    6380 addons.go:234] Setting addon default-storageclass=true in "functional-070600"
	W0229 18:00:51.893617    6380 addons.go:243] addon default-storageclass should already be in state true
	I0229 18:00:51.893617    6380 host.go:66] Checking if "functional-070600" exists ...
	I0229 18:00:51.894747    6380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-070600 ).state
	I0229 18:00:52.002908    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/namespaces/kube-system/pods/etcd-functional-070600
	I0229 18:00:52.002908    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:52.002908    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:52.002908    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:52.007088    6380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:00:52.007820    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:52.007820    6380 round_trippers.go:580]     Audit-Id: 7de543e9-01e5-4960-875b-d9258009518e
	I0229 18:00:52.007820    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:52.007820    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:52.007820    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:52.007820    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:52.007820    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:52 GMT
	I0229 18:00:52.008289    6380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-070600","namespace":"kube-system","uid":"15ee6939-45d3-4680-9f03-ce44934af9d6","resourceVersion":"528","creationTimestamp":"2024-02-29T17:58:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.26.52.106:2379","kubernetes.io/config.hash":"7c6cfa9b9829804068e32ca2ae30321e","kubernetes.io/config.mirror":"7c6cfa9b9829804068e32ca2ae30321e","kubernetes.io/config.seen":"2024-02-29T17:58:21.000515208Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:58:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6300 chars]
	I0229 18:00:52.008681    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/nodes/functional-070600
	I0229 18:00:52.008681    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:52.008681    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:52.008681    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:52.012305    6380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:00:52.013271    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:52.013338    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:52 GMT
	I0229 18:00:52.013338    6380 round_trippers.go:580]     Audit-Id: cd069aeb-0047-4ffc-bca2-ddd0a78dd379
	I0229 18:00:52.013338    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:52.013338    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:52.013338    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:52.013338    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:52.013754    6380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","resourceVersion":"416","creationTimestamp":"2024-02-29T17:58:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-070600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-070600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_58_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:58:17Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0229 18:00:52.014179    6380 pod_ready.go:102] pod "etcd-functional-070600" in "kube-system" namespace has status "Ready":"False"
	I0229 18:00:52.512897    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/namespaces/kube-system/pods/etcd-functional-070600
	I0229 18:00:52.513011    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:52.513011    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:52.513011    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:52.516614    6380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:00:52.516820    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:52.516820    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:52.516820    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:52.516820    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:52 GMT
	I0229 18:00:52.516820    6380 round_trippers.go:580]     Audit-Id: 1ed1727e-6359-4c48-94d9-cc8bff11bef9
	I0229 18:00:52.516820    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:52.516820    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:52.516820    6380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-070600","namespace":"kube-system","uid":"15ee6939-45d3-4680-9f03-ce44934af9d6","resourceVersion":"528","creationTimestamp":"2024-02-29T17:58:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.26.52.106:2379","kubernetes.io/config.hash":"7c6cfa9b9829804068e32ca2ae30321e","kubernetes.io/config.mirror":"7c6cfa9b9829804068e32ca2ae30321e","kubernetes.io/config.seen":"2024-02-29T17:58:21.000515208Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:58:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6300 chars]
	I0229 18:00:52.517873    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/nodes/functional-070600
	I0229 18:00:52.517873    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:52.517873    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:52.517873    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:52.525594    6380 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 18:00:52.525678    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:52.525678    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:52.525678    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:52.525760    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:52.525760    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:52 GMT
	I0229 18:00:52.525760    6380 round_trippers.go:580]     Audit-Id: 7f414868-a68f-4190-9116-49ac12b80349
	I0229 18:00:52.525834    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:52.525898    6380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","resourceVersion":"416","creationTimestamp":"2024-02-29T17:58:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-070600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-070600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_58_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:58:17Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0229 18:00:53.006263    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/namespaces/kube-system/pods/etcd-functional-070600
	I0229 18:00:53.006489    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:53.006489    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:53.006489    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:53.010822    6380 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 18:00:53.010822    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:53.010822    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:53.010822    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:53.010822    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:53.010822    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:53.010822    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:53 GMT
	I0229 18:00:53.010822    6380 round_trippers.go:580]     Audit-Id: 712751c1-3870-4690-bc7c-27f3a5c8cf82
	I0229 18:00:53.011539    6380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-070600","namespace":"kube-system","uid":"15ee6939-45d3-4680-9f03-ce44934af9d6","resourceVersion":"528","creationTimestamp":"2024-02-29T17:58:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.26.52.106:2379","kubernetes.io/config.hash":"7c6cfa9b9829804068e32ca2ae30321e","kubernetes.io/config.mirror":"7c6cfa9b9829804068e32ca2ae30321e","kubernetes.io/config.seen":"2024-02-29T17:58:21.000515208Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:58:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6300 chars]
	I0229 18:00:53.012126    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/nodes/functional-070600
	I0229 18:00:53.012126    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:53.012126    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:53.012126    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:53.016291    6380 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 18:00:53.016485    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:53.016485    6380 round_trippers.go:580]     Audit-Id: f3f466d7-2ac1-44f5-8a19-6bb8d5981efe
	I0229 18:00:53.016485    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:53.016485    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:53.016485    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:53.016485    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:53.016485    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:53 GMT
	I0229 18:00:53.016834    6380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","resourceVersion":"416","creationTimestamp":"2024-02-29T17:58:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-070600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-070600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_58_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:58:17Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0229 18:00:53.515075    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/namespaces/kube-system/pods/etcd-functional-070600
	I0229 18:00:53.515075    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:53.515075    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:53.515075    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:53.520045    6380 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 18:00:53.520045    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:53.520045    6380 round_trippers.go:580]     Audit-Id: 0b44fe5b-f80c-4100-b8d8-074b2ceda7e1
	I0229 18:00:53.520045    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:53.520045    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:53.520576    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:53.520576    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:53.520576    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:53 GMT
	I0229 18:00:53.520789    6380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-070600","namespace":"kube-system","uid":"15ee6939-45d3-4680-9f03-ce44934af9d6","resourceVersion":"528","creationTimestamp":"2024-02-29T17:58:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.26.52.106:2379","kubernetes.io/config.hash":"7c6cfa9b9829804068e32ca2ae30321e","kubernetes.io/config.mirror":"7c6cfa9b9829804068e32ca2ae30321e","kubernetes.io/config.seen":"2024-02-29T17:58:21.000515208Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:58:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6300 chars]
	I0229 18:00:53.521423    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/nodes/functional-070600
	I0229 18:00:53.521473    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:53.521473    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:53.521473    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:53.524914    6380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:00:53.524914    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:53.524914    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:53.524914    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:53.524914    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:53 GMT
	I0229 18:00:53.524914    6380 round_trippers.go:580]     Audit-Id: a76e8a34-4832-4978-8db2-81a54dd8cc6c
	I0229 18:00:53.524914    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:53.524914    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:53.524914    6380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","resourceVersion":"416","creationTimestamp":"2024-02-29T17:58:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-070600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-070600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_58_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:58:17Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0229 18:00:53.906094    6380 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:00:53.906094    6380 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:00:53.906094    6380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-070600 ).networkadapters[0]).ipaddresses[0]
	I0229 18:00:53.936198    6380 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:00:53.936881    6380 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:00:53.937018    6380 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 18:00:53.937018    6380 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 18:00:53.937061    6380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-070600 ).state
	I0229 18:00:54.006193    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/namespaces/kube-system/pods/etcd-functional-070600
	I0229 18:00:54.006193    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:54.006193    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:54.006193    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:54.010665    6380 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 18:00:54.011417    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:54.011417    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:54.011417    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:54.011417    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:54 GMT
	I0229 18:00:54.011417    6380 round_trippers.go:580]     Audit-Id: 645a45f3-f8de-4f36-ba7b-9a2bd81c3550
	I0229 18:00:54.011417    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:54.011417    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:54.011676    6380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-070600","namespace":"kube-system","uid":"15ee6939-45d3-4680-9f03-ce44934af9d6","resourceVersion":"528","creationTimestamp":"2024-02-29T17:58:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.26.52.106:2379","kubernetes.io/config.hash":"7c6cfa9b9829804068e32ca2ae30321e","kubernetes.io/config.mirror":"7c6cfa9b9829804068e32ca2ae30321e","kubernetes.io/config.seen":"2024-02-29T17:58:21.000515208Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:58:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6300 chars]
	I0229 18:00:54.012537    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/nodes/functional-070600
	I0229 18:00:54.012537    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:54.012634    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:54.012634    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:54.015803    6380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:00:54.015803    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:54.015803    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:54 GMT
	I0229 18:00:54.015803    6380 round_trippers.go:580]     Audit-Id: f380a5a3-97de-4e7b-a1a5-0f92deacfdb6
	I0229 18:00:54.015803    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:54.015803    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:54.015803    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:54.015803    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:54.015803    6380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","resourceVersion":"416","creationTimestamp":"2024-02-29T17:58:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-070600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-070600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_58_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:58:17Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0229 18:00:54.016818    6380 pod_ready.go:102] pod "etcd-functional-070600" in "kube-system" namespace has status "Ready":"False"
	I0229 18:00:54.514860    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/namespaces/kube-system/pods/etcd-functional-070600
	I0229 18:00:54.514860    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:54.515129    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:54.515129    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:54.519607    6380 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 18:00:54.519607    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:54.519607    6380 round_trippers.go:580]     Audit-Id: 56a0fabd-360d-4abc-89d2-35957cb6fa29
	I0229 18:00:54.519607    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:54.519607    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:54.519607    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:54.519607    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:54.519607    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:54 GMT
	I0229 18:00:54.520667    6380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-070600","namespace":"kube-system","uid":"15ee6939-45d3-4680-9f03-ce44934af9d6","resourceVersion":"528","creationTimestamp":"2024-02-29T17:58:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.26.52.106:2379","kubernetes.io/config.hash":"7c6cfa9b9829804068e32ca2ae30321e","kubernetes.io/config.mirror":"7c6cfa9b9829804068e32ca2ae30321e","kubernetes.io/config.seen":"2024-02-29T17:58:21.000515208Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:58:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6300 chars]
	I0229 18:00:54.520997    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/nodes/functional-070600
	I0229 18:00:54.520997    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:54.520997    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:54.520997    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:54.525921    6380 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 18:00:54.525921    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:54.525921    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:54.525921    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:54.525921    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:54.525921    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:54.525921    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:54 GMT
	I0229 18:00:54.525921    6380 round_trippers.go:580]     Audit-Id: fb9b8acd-bd74-4518-9c4f-299db8ae1c46
	I0229 18:00:54.526329    6380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","resourceVersion":"416","creationTimestamp":"2024-02-29T17:58:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-070600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-070600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_58_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:58:17Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0229 18:00:55.007852    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/namespaces/kube-system/pods/etcd-functional-070600
	I0229 18:00:55.007852    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:55.007951    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:55.007951    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:55.012427    6380 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 18:00:55.012427    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:55.012657    6380 round_trippers.go:580]     Audit-Id: 3a2862a8-4cb9-4194-bf51-2a296065b92f
	I0229 18:00:55.012690    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:55.012690    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:55.012690    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:55.012690    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:55.012690    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:55 GMT
	I0229 18:00:55.013104    6380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-070600","namespace":"kube-system","uid":"15ee6939-45d3-4680-9f03-ce44934af9d6","resourceVersion":"528","creationTimestamp":"2024-02-29T17:58:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.26.52.106:2379","kubernetes.io/config.hash":"7c6cfa9b9829804068e32ca2ae30321e","kubernetes.io/config.mirror":"7c6cfa9b9829804068e32ca2ae30321e","kubernetes.io/config.seen":"2024-02-29T17:58:21.000515208Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:58:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6300 chars]
	I0229 18:00:55.013769    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/nodes/functional-070600
	I0229 18:00:55.013835    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:55.013835    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:55.013835    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:55.016369    6380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:00:55.017421    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:55.017421    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:55 GMT
	I0229 18:00:55.017421    6380 round_trippers.go:580]     Audit-Id: bca96d08-2d72-4cc8-abf9-81c63ad4654f
	I0229 18:00:55.017421    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:55.017421    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:55.017421    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:55.017516    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:55.017819    6380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","resourceVersion":"416","creationTimestamp":"2024-02-29T17:58:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-070600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-070600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_58_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:58:17Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0229 18:00:55.501689    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/namespaces/kube-system/pods/etcd-functional-070600
	I0229 18:00:55.501689    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:55.501689    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:55.501689    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:55.506284    6380 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 18:00:55.506284    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:55.506284    6380 round_trippers.go:580]     Audit-Id: b393c34c-0319-47c0-bc92-c210a0bcf30b
	I0229 18:00:55.506284    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:55.506755    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:55.506755    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:55.506755    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:55.506755    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:55 GMT
	I0229 18:00:55.507095    6380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-070600","namespace":"kube-system","uid":"15ee6939-45d3-4680-9f03-ce44934af9d6","resourceVersion":"528","creationTimestamp":"2024-02-29T17:58:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.26.52.106:2379","kubernetes.io/config.hash":"7c6cfa9b9829804068e32ca2ae30321e","kubernetes.io/config.mirror":"7c6cfa9b9829804068e32ca2ae30321e","kubernetes.io/config.seen":"2024-02-29T17:58:21.000515208Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:58:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6300 chars]
	I0229 18:00:55.507885    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/nodes/functional-070600
	I0229 18:00:55.507950    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:55.507950    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:55.507950    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:55.510371    6380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:00:55.510371    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:55.510371    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:55.510371    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:55.510371    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:55.510371    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:55 GMT
	I0229 18:00:55.510371    6380 round_trippers.go:580]     Audit-Id: 31ed7cad-f93a-4ddf-8278-8d84815c7e27
	I0229 18:00:55.510371    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:55.511040    6380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","resourceVersion":"416","creationTimestamp":"2024-02-29T17:58:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-070600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-070600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_58_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:58:17Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0229 18:00:55.970346    6380 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:00:55.970564    6380 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:00:55.970614    6380 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-070600 ).networkadapters[0]).ipaddresses[0]
	I0229 18:00:56.012714    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/namespaces/kube-system/pods/etcd-functional-070600
	I0229 18:00:56.012714    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:56.012714    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:56.012714    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:56.017325    6380 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 18:00:56.017325    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:56.017325    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:56.017325    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:56.017325    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:56 GMT
	I0229 18:00:56.017325    6380 round_trippers.go:580]     Audit-Id: f35f1d4d-f172-45a2-a361-8d8b092e9866
	I0229 18:00:56.017325    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:56.017325    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:56.017922    6380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-070600","namespace":"kube-system","uid":"15ee6939-45d3-4680-9f03-ce44934af9d6","resourceVersion":"528","creationTimestamp":"2024-02-29T17:58:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.26.52.106:2379","kubernetes.io/config.hash":"7c6cfa9b9829804068e32ca2ae30321e","kubernetes.io/config.mirror":"7c6cfa9b9829804068e32ca2ae30321e","kubernetes.io/config.seen":"2024-02-29T17:58:21.000515208Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:58:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6300 chars]
	I0229 18:00:56.018622    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/nodes/functional-070600
	I0229 18:00:56.018687    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:56.018687    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:56.018687    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:56.021270    6380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:00:56.022127    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:56.022127    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:56.022127    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:56.022127    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:56 GMT
	I0229 18:00:56.022231    6380 round_trippers.go:580]     Audit-Id: a20e3179-0d84-4efa-adac-0bf872151d21
	I0229 18:00:56.022397    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:56.022397    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:56.022890    6380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","resourceVersion":"416","creationTimestamp":"2024-02-29T17:58:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-070600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-070600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_58_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:58:17Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0229 18:00:56.023145    6380 pod_ready.go:102] pod "etcd-functional-070600" in "kube-system" namespace has status "Ready":"False"
	I0229 18:00:56.300737    6380 main.go:141] libmachine: [stdout =====>] : 172.26.52.106
	
	I0229 18:00:56.300737    6380 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:00:56.301955    6380 sshutil.go:53] new ssh client: &{IP:172.26.52.106 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-070600\id_rsa Username:docker}
	I0229 18:00:56.437247    6380 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 18:00:56.503865    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/namespaces/kube-system/pods/etcd-functional-070600
	I0229 18:00:56.503973    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:56.503973    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:56.504034    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:56.507554    6380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:00:56.507904    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:56.507904    6380 round_trippers.go:580]     Audit-Id: f2f572d2-0ccc-4671-b227-a4b1e6ae2228
	I0229 18:00:56.507904    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:56.507961    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:56.507961    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:56.507961    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:56.507961    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:56 GMT
	I0229 18:00:56.508176    6380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-070600","namespace":"kube-system","uid":"15ee6939-45d3-4680-9f03-ce44934af9d6","resourceVersion":"528","creationTimestamp":"2024-02-29T17:58:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.26.52.106:2379","kubernetes.io/config.hash":"7c6cfa9b9829804068e32ca2ae30321e","kubernetes.io/config.mirror":"7c6cfa9b9829804068e32ca2ae30321e","kubernetes.io/config.seen":"2024-02-29T17:58:21.000515208Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:58:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6300 chars]
	I0229 18:00:56.508875    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/nodes/functional-070600
	I0229 18:00:56.508928    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:56.508928    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:56.508928    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:56.511053    6380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:00:56.511053    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:56.511053    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:56.511053    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:56.511841    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:56.511841    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:56 GMT
	I0229 18:00:56.511841    6380 round_trippers.go:580]     Audit-Id: fbb95f1d-e3ff-45af-9a86-11a7d6ed6912
	I0229 18:00:56.511841    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:56.512014    6380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","resourceVersion":"416","creationTimestamp":"2024-02-29T17:58:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-070600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-070600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_58_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:58:17Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0229 18:00:57.012374    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/namespaces/kube-system/pods/etcd-functional-070600
	I0229 18:00:57.012444    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:57.012444    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:57.012444    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:57.015193    6380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:00:57.015193    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:57.015193    6380 round_trippers.go:580]     Audit-Id: cf91813f-81ff-4e23-b37a-6d7b7945ef30
	I0229 18:00:57.015193    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:57.015193    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:57.015193    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:57.015193    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:57.015193    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:57 GMT
	I0229 18:00:57.016548    6380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-070600","namespace":"kube-system","uid":"15ee6939-45d3-4680-9f03-ce44934af9d6","resourceVersion":"528","creationTimestamp":"2024-02-29T17:58:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.26.52.106:2379","kubernetes.io/config.hash":"7c6cfa9b9829804068e32ca2ae30321e","kubernetes.io/config.mirror":"7c6cfa9b9829804068e32ca2ae30321e","kubernetes.io/config.seen":"2024-02-29T17:58:21.000515208Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:58:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6300 chars]
	I0229 18:00:57.016580    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/nodes/functional-070600
	I0229 18:00:57.017120    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:57.017120    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:57.017120    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:57.019777    6380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:00:57.019777    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:57.019777    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:57.019777    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:57 GMT
	I0229 18:00:57.019777    6380 round_trippers.go:580]     Audit-Id: d7b5405f-82f7-4118-8cef-085e2292a889
	I0229 18:00:57.019777    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:57.019777    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:57.019777    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:57.019777    6380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","resourceVersion":"416","creationTimestamp":"2024-02-29T17:58:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-070600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-070600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_58_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:58:17Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0229 18:00:57.487491    6380 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0229 18:00:57.487491    6380 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0229 18:00:57.487491    6380 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0229 18:00:57.487491    6380 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0229 18:00:57.487491    6380 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0229 18:00:57.487491    6380 command_runner.go:130] > pod/storage-provisioner configured
	I0229 18:00:57.487491    6380 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.0501857s)
	I0229 18:00:57.504447    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/namespaces/kube-system/pods/etcd-functional-070600
	I0229 18:00:57.504523    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:57.504523    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:57.504523    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:57.510561    6380 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 18:00:57.510561    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:57.510561    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:57.510561    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:57.510561    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:57.510561    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:57 GMT
	I0229 18:00:57.510561    6380 round_trippers.go:580]     Audit-Id: b667d1e2-bce6-4c0c-a5d3-ff7e5089e63c
	I0229 18:00:57.510561    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:57.510561    6380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-070600","namespace":"kube-system","uid":"15ee6939-45d3-4680-9f03-ce44934af9d6","resourceVersion":"528","creationTimestamp":"2024-02-29T17:58:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.26.52.106:2379","kubernetes.io/config.hash":"7c6cfa9b9829804068e32ca2ae30321e","kubernetes.io/config.mirror":"7c6cfa9b9829804068e32ca2ae30321e","kubernetes.io/config.seen":"2024-02-29T17:58:21.000515208Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:58:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6300 chars]
	I0229 18:00:57.511407    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/nodes/functional-070600
	I0229 18:00:57.511407    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:57.511407    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:57.511407    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:57.515038    6380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:00:57.515038    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:57.515038    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:57.515038    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:57 GMT
	I0229 18:00:57.515038    6380 round_trippers.go:580]     Audit-Id: a26e10f5-46d6-487e-8a0e-abc30a7fa280
	I0229 18:00:57.515038    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:57.515038    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:57.515038    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:57.515038    6380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","resourceVersion":"416","creationTimestamp":"2024-02-29T17:58:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-070600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-070600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_58_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:58:17Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0229 18:00:58.010352    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/namespaces/kube-system/pods/etcd-functional-070600
	I0229 18:00:58.010352    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:58.010352    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:58.010352    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:58.014836    6380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:00:58.014836    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:58.014836    6380 round_trippers.go:580]     Audit-Id: 6cdd3343-d152-4a35-a405-38d3a62d2159
	I0229 18:00:58.014836    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:58.014836    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:58.014836    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:58.014836    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:58.014836    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:58 GMT
	I0229 18:00:58.014836    6380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-070600","namespace":"kube-system","uid":"15ee6939-45d3-4680-9f03-ce44934af9d6","resourceVersion":"528","creationTimestamp":"2024-02-29T17:58:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.26.52.106:2379","kubernetes.io/config.hash":"7c6cfa9b9829804068e32ca2ae30321e","kubernetes.io/config.mirror":"7c6cfa9b9829804068e32ca2ae30321e","kubernetes.io/config.seen":"2024-02-29T17:58:21.000515208Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:58:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6300 chars]
	I0229 18:00:58.015765    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/nodes/functional-070600
	I0229 18:00:58.015845    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:58.015845    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:58.015845    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:58.018554    6380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:00:58.018554    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:58.018676    6380 round_trippers.go:580]     Audit-Id: 7c2e51b9-deb5-43a9-9c4c-a9fe42da9598
	I0229 18:00:58.018676    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:58.018676    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:58.018676    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:58.018676    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:58.018676    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:58 GMT
	I0229 18:00:58.018832    6380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","resourceVersion":"416","creationTimestamp":"2024-02-29T17:58:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-070600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-070600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_58_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:58:17Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0229 18:00:58.385034    6380 main.go:141] libmachine: [stdout =====>] : 172.26.52.106
	
	I0229 18:00:58.385034    6380 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:00:58.385034    6380 sshutil.go:53] new ssh client: &{IP:172.26.52.106 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-070600\id_rsa Username:docker}
	I0229 18:00:58.514680    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/namespaces/kube-system/pods/etcd-functional-070600
	I0229 18:00:58.514738    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:58.514771    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:58.514771    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:58.522654    6380 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 18:00:58.522654    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:58.522654    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:58 GMT
	I0229 18:00:58.522654    6380 round_trippers.go:580]     Audit-Id: 784fefa8-336a-413a-b758-ef5220088a81
	I0229 18:00:58.522654    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:58.522654    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:58.522654    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:58.522654    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:58.522654    6380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-070600","namespace":"kube-system","uid":"15ee6939-45d3-4680-9f03-ce44934af9d6","resourceVersion":"528","creationTimestamp":"2024-02-29T17:58:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.26.52.106:2379","kubernetes.io/config.hash":"7c6cfa9b9829804068e32ca2ae30321e","kubernetes.io/config.mirror":"7c6cfa9b9829804068e32ca2ae30321e","kubernetes.io/config.seen":"2024-02-29T17:58:21.000515208Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:58:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6300 chars]
	I0229 18:00:58.522654    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/nodes/functional-070600
	I0229 18:00:58.522654    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:58.522654    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:58.522654    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:58.527889    6380 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 18:00:58.528312    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:58.528312    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:58.528405    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:58 GMT
	I0229 18:00:58.528469    6380 round_trippers.go:580]     Audit-Id: 0d734244-1594-4f09-a4e3-84ed501549c9
	I0229 18:00:58.528469    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:58.528469    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:58.528469    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:58.528469    6380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","resourceVersion":"416","creationTimestamp":"2024-02-29T17:58:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-070600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-070600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_58_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:58:17Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0229 18:00:58.529209    6380 pod_ready.go:102] pod "etcd-functional-070600" in "kube-system" namespace has status "Ready":"False"
	I0229 18:00:58.538374    6380 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 18:00:58.797685    6380 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0229 18:00:58.797685    6380 round_trippers.go:463] GET https://172.26.52.106:8441/apis/storage.k8s.io/v1/storageclasses
	I0229 18:00:58.797685    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:58.797685    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:58.797685    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:58.804426    6380 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 18:00:58.804426    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:58.804426    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:58 GMT
	I0229 18:00:58.804426    6380 round_trippers.go:580]     Audit-Id: be39ab10-b615-4aa6-9851-14164daae1c6
	I0229 18:00:58.804426    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:58.804426    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:58.804955    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:58.804955    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:58.804955    6380 round_trippers.go:580]     Content-Length: 1273
	I0229 18:00:58.805029    6380 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"594"},"items":[{"metadata":{"name":"standard","uid":"5bc15f41-8ac4-4ed0-8fdd-04228140718d","resourceVersion":"420","creationTimestamp":"2024-02-29T17:58:42Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-02-29T17:58:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0229 18:00:58.805726    6380 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"5bc15f41-8ac4-4ed0-8fdd-04228140718d","resourceVersion":"420","creationTimestamp":"2024-02-29T17:58:42Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-02-29T17:58:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0229 18:00:58.805836    6380 round_trippers.go:463] PUT https://172.26.52.106:8441/apis/storage.k8s.io/v1/storageclasses/standard
	I0229 18:00:58.805836    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:58.805836    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:58.805836    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:58.805836    6380 round_trippers.go:473]     Content-Type: application/json
	I0229 18:00:58.809405    6380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:00:58.810321    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:58.810321    6380 round_trippers.go:580]     Content-Length: 1220
	I0229 18:00:58.810321    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:58 GMT
	I0229 18:00:58.810321    6380 round_trippers.go:580]     Audit-Id: 78e827e2-43cf-4513-aa08-7874408f17c3
	I0229 18:00:58.810321    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:58.810321    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:58.810321    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:58.810321    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:58.810321    6380 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"5bc15f41-8ac4-4ed0-8fdd-04228140718d","resourceVersion":"420","creationTimestamp":"2024-02-29T17:58:42Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-02-29T17:58:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0229 18:00:58.811540    6380 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0229 18:00:58.811540    6380 addons.go:505] enable addons completed in 8.9957012s: enabled=[storage-provisioner default-storageclass]
	I0229 18:00:59.004063    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/namespaces/kube-system/pods/etcd-functional-070600
	I0229 18:00:59.004063    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:59.004063    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:59.004063    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:59.007669    6380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:00:59.007669    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:59.007669    6380 round_trippers.go:580]     Audit-Id: 5e8d31d1-1aee-4355-be5c-0858a52ad510
	I0229 18:00:59.007669    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:59.007669    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:59.007669    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:59.007669    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:59.008373    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:59 GMT
	I0229 18:00:59.008532    6380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-070600","namespace":"kube-system","uid":"15ee6939-45d3-4680-9f03-ce44934af9d6","resourceVersion":"528","creationTimestamp":"2024-02-29T17:58:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.26.52.106:2379","kubernetes.io/config.hash":"7c6cfa9b9829804068e32ca2ae30321e","kubernetes.io/config.mirror":"7c6cfa9b9829804068e32ca2ae30321e","kubernetes.io/config.seen":"2024-02-29T17:58:21.000515208Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:58:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6300 chars]
	I0229 18:00:59.009386    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/nodes/functional-070600
	I0229 18:00:59.009461    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:59.009461    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:59.009461    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:59.014166    6380 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 18:00:59.014166    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:59.014166    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:59.014166    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:59.014166    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:59.014166    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:59 GMT
	I0229 18:00:59.014166    6380 round_trippers.go:580]     Audit-Id: 501e01ec-1f18-4023-bc87-ef73f454cbe3
	I0229 18:00:59.014166    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:59.014166    6380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","resourceVersion":"416","creationTimestamp":"2024-02-29T17:58:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-070600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-070600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_58_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:58:17Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0229 18:00:59.503242    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/namespaces/kube-system/pods/etcd-functional-070600
	I0229 18:00:59.503335    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:59.503335    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:59.503335    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:59.510437    6380 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 18:00:59.510437    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:59.510437    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:59.510437    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:59.510437    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:59.510437    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:59 GMT
	I0229 18:00:59.510437    6380 round_trippers.go:580]     Audit-Id: 8ec299d8-1953-4132-90a5-175bb4591e6c
	I0229 18:00:59.510437    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:59.510437    6380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-070600","namespace":"kube-system","uid":"15ee6939-45d3-4680-9f03-ce44934af9d6","resourceVersion":"528","creationTimestamp":"2024-02-29T17:58:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.26.52.106:2379","kubernetes.io/config.hash":"7c6cfa9b9829804068e32ca2ae30321e","kubernetes.io/config.mirror":"7c6cfa9b9829804068e32ca2ae30321e","kubernetes.io/config.seen":"2024-02-29T17:58:21.000515208Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:58:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6300 chars]
	I0229 18:00:59.511839    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/nodes/functional-070600
	I0229 18:00:59.511839    6380 round_trippers.go:469] Request Headers:
	I0229 18:00:59.511839    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:00:59.511839    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:00:59.516119    6380 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 18:00:59.516258    6380 round_trippers.go:577] Response Headers:
	I0229 18:00:59.516258    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:00:59.516258    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:00:59.516258    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:00:59 GMT
	I0229 18:00:59.516258    6380 round_trippers.go:580]     Audit-Id: fffe36b0-dcfc-4ec6-80b8-499f81ec3d85
	I0229 18:00:59.516333    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:00:59.516333    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:00:59.516465    6380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","resourceVersion":"416","creationTimestamp":"2024-02-29T17:58:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-070600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-070600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_58_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:58:17Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0229 18:01:00.003507    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/namespaces/kube-system/pods/etcd-functional-070600
	I0229 18:01:00.003597    6380 round_trippers.go:469] Request Headers:
	I0229 18:01:00.003597    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:01:00.003597    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:01:00.008690    6380 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 18:01:00.008690    6380 round_trippers.go:577] Response Headers:
	I0229 18:01:00.008690    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:01:00.008690    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:01:00 GMT
	I0229 18:01:00.008690    6380 round_trippers.go:580]     Audit-Id: 603a7c3f-8e55-4260-b5f0-1eb0120f1319
	I0229 18:01:00.008690    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:01:00.008690    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:01:00.008690    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:01:00.008690    6380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-070600","namespace":"kube-system","uid":"15ee6939-45d3-4680-9f03-ce44934af9d6","resourceVersion":"597","creationTimestamp":"2024-02-29T17:58:21Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.26.52.106:2379","kubernetes.io/config.hash":"7c6cfa9b9829804068e32ca2ae30321e","kubernetes.io/config.mirror":"7c6cfa9b9829804068e32ca2ae30321e","kubernetes.io/config.seen":"2024-02-29T17:58:21.000515208Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:58:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6076 chars]
	I0229 18:01:00.009689    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/nodes/functional-070600
	I0229 18:01:00.009689    6380 round_trippers.go:469] Request Headers:
	I0229 18:01:00.009689    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:01:00.009689    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:01:00.016890    6380 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 18:01:00.016890    6380 round_trippers.go:577] Response Headers:
	I0229 18:01:00.016890    6380 round_trippers.go:580]     Audit-Id: 9de79812-c33c-4262-b4d4-06b7ffea1b8e
	I0229 18:01:00.017104    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:01:00.017104    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:01:00.017104    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:01:00.017104    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:01:00.017104    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:01:00 GMT
	I0229 18:01:00.017350    6380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","resourceVersion":"416","creationTimestamp":"2024-02-29T17:58:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-070600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-070600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_58_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:58:17Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0229 18:01:00.017713    6380 pod_ready.go:92] pod "etcd-functional-070600" in "kube-system" namespace has status "Ready":"True"
	I0229 18:01:00.017713    6380 pod_ready.go:81] duration metric: took 10.0165177s waiting for pod "etcd-functional-070600" in "kube-system" namespace to be "Ready" ...
	I0229 18:01:00.017713    6380 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-070600" in "kube-system" namespace to be "Ready" ...
	I0229 18:01:00.017713    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-070600
	I0229 18:01:00.017713    6380 round_trippers.go:469] Request Headers:
	I0229 18:01:00.017713    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:01:00.017713    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:01:00.020583    6380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:01:00.020583    6380 round_trippers.go:577] Response Headers:
	I0229 18:01:00.020583    6380 round_trippers.go:580]     Audit-Id: 189733a1-6024-44b2-bcfc-7f8616473c93
	I0229 18:01:00.020583    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:01:00.020583    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:01:00.020583    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:01:00.020583    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:01:00.020583    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:01:00 GMT
	I0229 18:01:00.020583    6380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-070600","namespace":"kube-system","uid":"65eef792-eafc-48a8-8865-f7f01371fa6e","resourceVersion":"529","creationTimestamp":"2024-02-29T17:58:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.26.52.106:8441","kubernetes.io/config.hash":"9a2beeb9e2bab428fb4cc9e292b35be5","kubernetes.io/config.mirror":"9a2beeb9e2bab428fb4cc9e292b35be5","kubernetes.io/config.seen":"2024-02-29T17:58:21.000523108Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:58:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7853 chars]
	I0229 18:01:00.021578    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/nodes/functional-070600
	I0229 18:01:00.021578    6380 round_trippers.go:469] Request Headers:
	I0229 18:01:00.021578    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:01:00.021578    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:01:00.024171    6380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:01:00.024731    6380 round_trippers.go:577] Response Headers:
	I0229 18:01:00.024731    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:01:00.024731    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:01:00.024731    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:01:00.024731    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:01:00.024731    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:01:00 GMT
	I0229 18:01:00.024731    6380 round_trippers.go:580]     Audit-Id: 70406a80-6a62-4cde-9e99-f09b1c5186b0
	I0229 18:01:00.024927    6380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","resourceVersion":"416","creationTimestamp":"2024-02-29T17:58:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-070600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-070600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_58_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:58:17Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0229 18:01:00.519493    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-070600
	I0229 18:01:00.519565    6380 round_trippers.go:469] Request Headers:
	I0229 18:01:00.519565    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:01:00.519565    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:01:00.526153    6380 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 18:01:00.526153    6380 round_trippers.go:577] Response Headers:
	I0229 18:01:00.526153    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:01:00.526153    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:01:00.526153    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:01:00.526153    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:01:00 GMT
	I0229 18:01:00.526153    6380 round_trippers.go:580]     Audit-Id: 9af99117-f5d8-464c-a611-1fbe5efff712
	I0229 18:01:00.526153    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:01:00.527218    6380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-070600","namespace":"kube-system","uid":"65eef792-eafc-48a8-8865-f7f01371fa6e","resourceVersion":"529","creationTimestamp":"2024-02-29T17:58:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.26.52.106:8441","kubernetes.io/config.hash":"9a2beeb9e2bab428fb4cc9e292b35be5","kubernetes.io/config.mirror":"9a2beeb9e2bab428fb4cc9e292b35be5","kubernetes.io/config.seen":"2024-02-29T17:58:21.000523108Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:58:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7853 chars]
	I0229 18:01:00.528259    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/nodes/functional-070600
	I0229 18:01:00.528320    6380 round_trippers.go:469] Request Headers:
	I0229 18:01:00.528387    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:01:00.528387    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:01:00.536376    6380 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 18:01:00.536376    6380 round_trippers.go:577] Response Headers:
	I0229 18:01:00.536376    6380 round_trippers.go:580]     Audit-Id: b4893515-ba13-410f-a45f-4d842be49f58
	I0229 18:01:00.536376    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:01:00.536376    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:01:00.536376    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:01:00.536376    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:01:00.536376    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:01:00 GMT
	I0229 18:01:00.536376    6380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","resourceVersion":"416","creationTimestamp":"2024-02-29T17:58:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-070600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-070600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_58_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:58:17Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0229 18:01:01.022317    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-070600
	I0229 18:01:01.022388    6380 round_trippers.go:469] Request Headers:
	I0229 18:01:01.022388    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:01:01.022388    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:01:01.025733    6380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:01:01.026448    6380 round_trippers.go:577] Response Headers:
	I0229 18:01:01.026448    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:01:01.026448    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:01:01 GMT
	I0229 18:01:01.026448    6380 round_trippers.go:580]     Audit-Id: 6bdd92f4-d1ac-4681-9676-352e8595bdee
	I0229 18:01:01.026448    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:01:01.026448    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:01:01.026448    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:01:01.026824    6380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-070600","namespace":"kube-system","uid":"65eef792-eafc-48a8-8865-f7f01371fa6e","resourceVersion":"529","creationTimestamp":"2024-02-29T17:58:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.26.52.106:8441","kubernetes.io/config.hash":"9a2beeb9e2bab428fb4cc9e292b35be5","kubernetes.io/config.mirror":"9a2beeb9e2bab428fb4cc9e292b35be5","kubernetes.io/config.seen":"2024-02-29T17:58:21.000523108Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:58:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7853 chars]
	I0229 18:01:01.027761    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/nodes/functional-070600
	I0229 18:01:01.027761    6380 round_trippers.go:469] Request Headers:
	I0229 18:01:01.027761    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:01:01.027855    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:01:01.031414    6380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:01:01.031414    6380 round_trippers.go:577] Response Headers:
	I0229 18:01:01.031414    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:01:01.031414    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:01:01.031488    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:01:01.031488    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:01:01 GMT
	I0229 18:01:01.031488    6380 round_trippers.go:580]     Audit-Id: dd5eb877-054a-4c1f-b70d-a9689cfd64f3
	I0229 18:01:01.031488    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:01:01.031488    6380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","resourceVersion":"416","creationTimestamp":"2024-02-29T17:58:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-070600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-070600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_58_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:58:17Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0229 18:01:01.521597    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-070600
	I0229 18:01:01.521730    6380 round_trippers.go:469] Request Headers:
	I0229 18:01:01.521730    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:01:01.521730    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:01:01.525167    6380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:01:01.526233    6380 round_trippers.go:577] Response Headers:
	I0229 18:01:01.526233    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:01:01.526233    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:01:01.526233    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:01:01.526233    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:01:01.526233    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:01:01 GMT
	I0229 18:01:01.526233    6380 round_trippers.go:580]     Audit-Id: 25e04a86-70ae-4fb0-852a-0c9de27cf362
	I0229 18:01:01.526525    6380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-070600","namespace":"kube-system","uid":"65eef792-eafc-48a8-8865-f7f01371fa6e","resourceVersion":"529","creationTimestamp":"2024-02-29T17:58:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.26.52.106:8441","kubernetes.io/config.hash":"9a2beeb9e2bab428fb4cc9e292b35be5","kubernetes.io/config.mirror":"9a2beeb9e2bab428fb4cc9e292b35be5","kubernetes.io/config.seen":"2024-02-29T17:58:21.000523108Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:58:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7853 chars]
	I0229 18:01:01.527647    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/nodes/functional-070600
	I0229 18:01:01.527738    6380 round_trippers.go:469] Request Headers:
	I0229 18:01:01.527738    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:01:01.527738    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:01:01.535849    6380 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0229 18:01:01.535849    6380 round_trippers.go:577] Response Headers:
	I0229 18:01:01.535849    6380 round_trippers.go:580]     Audit-Id: 3d2e3dbd-1189-4c2b-aae7-f1182f61401c
	I0229 18:01:01.535849    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:01:01.535849    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:01:01.535849    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:01:01.536409    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:01:01.536409    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:01:01 GMT
	I0229 18:01:01.536580    6380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","resourceVersion":"416","creationTimestamp":"2024-02-29T17:58:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-070600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-070600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_58_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:58:17Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0229 18:01:02.020095    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-070600
	I0229 18:01:02.020169    6380 round_trippers.go:469] Request Headers:
	I0229 18:01:02.020169    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:01:02.020169    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:01:02.024505    6380 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 18:01:02.024505    6380 round_trippers.go:577] Response Headers:
	I0229 18:01:02.024505    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:01:02.025328    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:01:02.025328    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:01:02.025328    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:01:02.025328    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:01:02 GMT
	I0229 18:01:02.025328    6380 round_trippers.go:580]     Audit-Id: 7ebb4b4b-8d96-4a37-b120-c7c71e2abfc7
	I0229 18:01:02.025552    6380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-070600","namespace":"kube-system","uid":"65eef792-eafc-48a8-8865-f7f01371fa6e","resourceVersion":"529","creationTimestamp":"2024-02-29T17:58:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.26.52.106:8441","kubernetes.io/config.hash":"9a2beeb9e2bab428fb4cc9e292b35be5","kubernetes.io/config.mirror":"9a2beeb9e2bab428fb4cc9e292b35be5","kubernetes.io/config.seen":"2024-02-29T17:58:21.000523108Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:58:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7853 chars]
	I0229 18:01:02.026309    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/nodes/functional-070600
	I0229 18:01:02.026406    6380 round_trippers.go:469] Request Headers:
	I0229 18:01:02.026406    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:01:02.026406    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:01:02.029632    6380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:01:02.029632    6380 round_trippers.go:577] Response Headers:
	I0229 18:01:02.030517    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:01:02 GMT
	I0229 18:01:02.030517    6380 round_trippers.go:580]     Audit-Id: 5ecad0b2-9dae-47f8-9962-0df153d85201
	I0229 18:01:02.030517    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:01:02.030517    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:01:02.030517    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:01:02.030517    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:01:02.030832    6380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","resourceVersion":"416","creationTimestamp":"2024-02-29T17:58:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-070600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-070600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_58_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:58:17Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0229 18:01:02.031486    6380 pod_ready.go:102] pod "kube-apiserver-functional-070600" in "kube-system" namespace has status "Ready":"False"
	I0229 18:01:02.522310    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-070600
	I0229 18:01:02.522387    6380 round_trippers.go:469] Request Headers:
	I0229 18:01:02.522387    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:01:02.522387    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:01:02.526639    6380 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 18:01:02.526639    6380 round_trippers.go:577] Response Headers:
	I0229 18:01:02.526639    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:01:02 GMT
	I0229 18:01:02.526639    6380 round_trippers.go:580]     Audit-Id: e11526c2-5954-4a03-bd0b-9bdaaaaa3d9b
	I0229 18:01:02.526639    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:01:02.526639    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:01:02.526639    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:01:02.526639    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:01:02.527458    6380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-070600","namespace":"kube-system","uid":"65eef792-eafc-48a8-8865-f7f01371fa6e","resourceVersion":"529","creationTimestamp":"2024-02-29T17:58:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.26.52.106:8441","kubernetes.io/config.hash":"9a2beeb9e2bab428fb4cc9e292b35be5","kubernetes.io/config.mirror":"9a2beeb9e2bab428fb4cc9e292b35be5","kubernetes.io/config.seen":"2024-02-29T17:58:21.000523108Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:58:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7853 chars]
	I0229 18:01:02.528061    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/nodes/functional-070600
	I0229 18:01:02.528061    6380 round_trippers.go:469] Request Headers:
	I0229 18:01:02.528061    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:01:02.528061    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:01:02.531650    6380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:01:02.532409    6380 round_trippers.go:577] Response Headers:
	I0229 18:01:02.532409    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:01:02.532409    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:01:02 GMT
	I0229 18:01:02.532409    6380 round_trippers.go:580]     Audit-Id: 9307e5b6-f4ce-4dd5-96a9-068437784e26
	I0229 18:01:02.532409    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:01:02.532409    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:01:02.532409    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:01:02.532564    6380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","resourceVersion":"416","creationTimestamp":"2024-02-29T17:58:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-070600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-070600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_58_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:58:17Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0229 18:01:03.024103    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-070600
	I0229 18:01:03.024225    6380 round_trippers.go:469] Request Headers:
	I0229 18:01:03.024225    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:01:03.024225    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:01:03.027575    6380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:01:03.027575    6380 round_trippers.go:577] Response Headers:
	I0229 18:01:03.027575    6380 round_trippers.go:580]     Audit-Id: 4396f143-d8d6-42d3-a111-795b167ae1fc
	I0229 18:01:03.027575    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:01:03.027575    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:01:03.027575    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:01:03.027575    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:01:03.027575    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:01:03 GMT
	I0229 18:01:03.027575    6380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-070600","namespace":"kube-system","uid":"65eef792-eafc-48a8-8865-f7f01371fa6e","resourceVersion":"529","creationTimestamp":"2024-02-29T17:58:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.26.52.106:8441","kubernetes.io/config.hash":"9a2beeb9e2bab428fb4cc9e292b35be5","kubernetes.io/config.mirror":"9a2beeb9e2bab428fb4cc9e292b35be5","kubernetes.io/config.seen":"2024-02-29T17:58:21.000523108Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:58:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7853 chars]
	I0229 18:01:03.028524    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/nodes/functional-070600
	I0229 18:01:03.028524    6380 round_trippers.go:469] Request Headers:
	I0229 18:01:03.028524    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:01:03.028524    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:01:03.031523    6380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:01:03.031523    6380 round_trippers.go:577] Response Headers:
	I0229 18:01:03.031523    6380 round_trippers.go:580]     Audit-Id: 540dae0e-a131-4d8d-a18c-8e0925889525
	I0229 18:01:03.031523    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:01:03.031523    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:01:03.031523    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:01:03.032176    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:01:03.032200    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:01:03 GMT
	I0229 18:01:03.032459    6380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","resourceVersion":"416","creationTimestamp":"2024-02-29T17:58:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-070600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-070600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_58_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:58:17Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0229 18:01:03.518652    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-070600
	I0229 18:01:03.518652    6380 round_trippers.go:469] Request Headers:
	I0229 18:01:03.518729    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:01:03.518729    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:01:03.526272    6380 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 18:01:03.526272    6380 round_trippers.go:577] Response Headers:
	I0229 18:01:03.526272    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:01:03.526272    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:01:03 GMT
	I0229 18:01:03.526272    6380 round_trippers.go:580]     Audit-Id: b1b81629-4365-43d0-aee4-bdf68eae67dd
	I0229 18:01:03.526272    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:01:03.526272    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:01:03.526272    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:01:03.526272    6380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-070600","namespace":"kube-system","uid":"65eef792-eafc-48a8-8865-f7f01371fa6e","resourceVersion":"529","creationTimestamp":"2024-02-29T17:58:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.26.52.106:8441","kubernetes.io/config.hash":"9a2beeb9e2bab428fb4cc9e292b35be5","kubernetes.io/config.mirror":"9a2beeb9e2bab428fb4cc9e292b35be5","kubernetes.io/config.seen":"2024-02-29T17:58:21.000523108Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:58:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7853 chars]
	I0229 18:01:03.527364    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/nodes/functional-070600
	I0229 18:01:03.527412    6380 round_trippers.go:469] Request Headers:
	I0229 18:01:03.527412    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:01:03.527412    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:01:03.530000    6380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:01:03.530000    6380 round_trippers.go:577] Response Headers:
	I0229 18:01:03.530000    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:01:03.530000    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:01:03.530000    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:01:03 GMT
	I0229 18:01:03.530000    6380 round_trippers.go:580]     Audit-Id: 648e2578-6894-4b74-8659-65aaa1e44db2
	I0229 18:01:03.530000    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:01:03.530000    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:01:03.530000    6380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","resourceVersion":"416","creationTimestamp":"2024-02-29T17:58:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-070600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-070600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_58_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:58:17Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0229 18:01:04.024943    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-070600
	I0229 18:01:04.025031    6380 round_trippers.go:469] Request Headers:
	I0229 18:01:04.025031    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:01:04.025031    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:01:04.031471    6380 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 18:01:04.031471    6380 round_trippers.go:577] Response Headers:
	I0229 18:01:04.031471    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:01:04.031471    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:01:04.031471    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:01:04.031471    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:01:04 GMT
	I0229 18:01:04.031471    6380 round_trippers.go:580]     Audit-Id: 3b188d43-4a3f-42dc-9f75-284a3b0890e3
	I0229 18:01:04.031660    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:01:04.031841    6380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-070600","namespace":"kube-system","uid":"65eef792-eafc-48a8-8865-f7f01371fa6e","resourceVersion":"602","creationTimestamp":"2024-02-29T17:58:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.26.52.106:8441","kubernetes.io/config.hash":"9a2beeb9e2bab428fb4cc9e292b35be5","kubernetes.io/config.mirror":"9a2beeb9e2bab428fb4cc9e292b35be5","kubernetes.io/config.seen":"2024-02-29T17:58:21.000523108Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:58:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7609 chars]
	I0229 18:01:04.032498    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/nodes/functional-070600
	I0229 18:01:04.032498    6380 round_trippers.go:469] Request Headers:
	I0229 18:01:04.032498    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:01:04.032498    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:01:04.035775    6380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:01:04.035775    6380 round_trippers.go:577] Response Headers:
	I0229 18:01:04.035775    6380 round_trippers.go:580]     Audit-Id: 4759cbfb-ac8b-4332-a7f7-f6e63f3a6b52
	I0229 18:01:04.035775    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:01:04.035775    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:01:04.035775    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:01:04.035775    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:01:04.035775    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:01:04 GMT
	I0229 18:01:04.035775    6380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","resourceVersion":"416","creationTimestamp":"2024-02-29T17:58:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-070600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-070600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_58_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:58:17Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0229 18:01:04.036775    6380 pod_ready.go:92] pod "kube-apiserver-functional-070600" in "kube-system" namespace has status "Ready":"True"
	I0229 18:01:04.036865    6380 pod_ready.go:81] duration metric: took 4.0189294s waiting for pod "kube-apiserver-functional-070600" in "kube-system" namespace to be "Ready" ...
	I0229 18:01:04.036865    6380 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-070600" in "kube-system" namespace to be "Ready" ...
	I0229 18:01:04.036962    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-070600
	I0229 18:01:04.036962    6380 round_trippers.go:469] Request Headers:
	I0229 18:01:04.037043    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:01:04.037043    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:01:04.039669    6380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:01:04.039669    6380 round_trippers.go:577] Response Headers:
	I0229 18:01:04.039669    6380 round_trippers.go:580]     Audit-Id: e469f719-d528-495b-b8c8-37571a63aaad
	I0229 18:01:04.039669    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:01:04.039669    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:01:04.039669    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:01:04.039669    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:01:04.039669    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:01:04 GMT
	I0229 18:01:04.039669    6380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-070600","namespace":"kube-system","uid":"36899a27-3284-4e92-9288-866d7a3c97ba","resourceVersion":"591","creationTimestamp":"2024-02-29T17:58:18Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"2af6b4e0ecac3f11c253aa4c6679e08e","kubernetes.io/config.mirror":"2af6b4e0ecac3f11c253aa4c6679e08e","kubernetes.io/config.seen":"2024-02-29T17:58:12.949596306Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:58:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7177 chars]
	I0229 18:01:04.040842    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/nodes/functional-070600
	I0229 18:01:04.040884    6380 round_trippers.go:469] Request Headers:
	I0229 18:01:04.040917    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:01:04.040917    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:01:04.044194    6380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:01:04.044194    6380 round_trippers.go:577] Response Headers:
	I0229 18:01:04.044194    6380 round_trippers.go:580]     Audit-Id: 1d5bed5c-589f-469c-b9f4-3613c02d05e4
	I0229 18:01:04.044194    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:01:04.044194    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:01:04.044194    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:01:04.044194    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:01:04.044194    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:01:04 GMT
	I0229 18:01:04.044194    6380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","resourceVersion":"416","creationTimestamp":"2024-02-29T17:58:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-070600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-070600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_58_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:58:17Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0229 18:01:04.045073    6380 pod_ready.go:92] pod "kube-controller-manager-functional-070600" in "kube-system" namespace has status "Ready":"True"
	I0229 18:01:04.045073    6380 pod_ready.go:81] duration metric: took 8.2077ms waiting for pod "kube-controller-manager-functional-070600" in "kube-system" namespace to be "Ready" ...
	I0229 18:01:04.045073    6380 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wj6dl" in "kube-system" namespace to be "Ready" ...
	I0229 18:01:04.045073    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/namespaces/kube-system/pods/kube-proxy-wj6dl
	I0229 18:01:04.045073    6380 round_trippers.go:469] Request Headers:
	I0229 18:01:04.045073    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:01:04.045073    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:01:04.048658    6380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:01:04.048658    6380 round_trippers.go:577] Response Headers:
	I0229 18:01:04.048808    6380 round_trippers.go:580]     Audit-Id: 9c6e32d4-2564-4468-a3dc-8ade3ac6087f
	I0229 18:01:04.048808    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:01:04.048808    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:01:04.048808    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:01:04.048808    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:01:04.048808    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:01:04 GMT
	I0229 18:01:04.049009    6380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-wj6dl","generateName":"kube-proxy-","namespace":"kube-system","uid":"f2beec7d-0917-4c10-bbe6-303accd46692","resourceVersion":"525","creationTimestamp":"2024-02-29T17:58:33Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"753f55d2-5641-4e5a-b3e3-06bc9c87ae96","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:58:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"753f55d2-5641-4e5a-b3e3-06bc9c87ae96\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5740 chars]
	I0229 18:01:04.049561    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/nodes/functional-070600
	I0229 18:01:04.049561    6380 round_trippers.go:469] Request Headers:
	I0229 18:01:04.049561    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:01:04.049561    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:01:04.053019    6380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:01:04.053019    6380 round_trippers.go:577] Response Headers:
	I0229 18:01:04.053019    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:01:04.053019    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:01:04.053019    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:01:04.053019    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:01:04.053019    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:01:04 GMT
	I0229 18:01:04.053019    6380 round_trippers.go:580]     Audit-Id: 92d5d879-15f8-453b-86b1-5d7b3c73835b
	I0229 18:01:04.053294    6380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","resourceVersion":"416","creationTimestamp":"2024-02-29T17:58:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-070600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-070600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_58_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:58:17Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0229 18:01:04.053294    6380 pod_ready.go:92] pod "kube-proxy-wj6dl" in "kube-system" namespace has status "Ready":"True"
	I0229 18:01:04.053294    6380 pod_ready.go:81] duration metric: took 8.2205ms waiting for pod "kube-proxy-wj6dl" in "kube-system" namespace to be "Ready" ...
	I0229 18:01:04.053294    6380 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-070600" in "kube-system" namespace to be "Ready" ...
	I0229 18:01:04.053294    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-070600
	I0229 18:01:04.053294    6380 round_trippers.go:469] Request Headers:
	I0229 18:01:04.053850    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:01:04.053850    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:01:04.056077    6380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:01:04.056367    6380 round_trippers.go:577] Response Headers:
	I0229 18:01:04.056367    6380 round_trippers.go:580]     Audit-Id: 208de9c9-93cd-468e-a9cc-b0ec6cecc0e2
	I0229 18:01:04.056367    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:01:04.056367    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:01:04.056367    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:01:04.056367    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:01:04.056367    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:01:04 GMT
	I0229 18:01:04.056367    6380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-070600","namespace":"kube-system","uid":"c6d69b9e-84ea-4827-bc4b-2a9387081024","resourceVersion":"593","creationTimestamp":"2024-02-29T17:58:19Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"caaf0d3ee789deca90597564c987b2bd","kubernetes.io/config.mirror":"caaf0d3ee789deca90597564c987b2bd","kubernetes.io/config.seen":"2024-02-29T17:58:12.949597406Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 4907 chars]
	I0229 18:01:04.056977    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/nodes/functional-070600
	I0229 18:01:04.057051    6380 round_trippers.go:469] Request Headers:
	I0229 18:01:04.057051    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:01:04.057051    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:01:04.059234    6380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:01:04.060238    6380 round_trippers.go:577] Response Headers:
	I0229 18:01:04.060238    6380 round_trippers.go:580]     Audit-Id: 872cee0f-802a-4900-992e-0e89b431e26b
	I0229 18:01:04.060238    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:01:04.060280    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:01:04.060280    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:01:04.060280    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:01:04.060280    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:01:04 GMT
	I0229 18:01:04.060545    6380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","resourceVersion":"416","creationTimestamp":"2024-02-29T17:58:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-070600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-070600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_58_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-02-29T17:58:17Z","fieldsType":"FieldsV1", [truncated 4786 chars]
	I0229 18:01:04.060545    6380 pod_ready.go:92] pod "kube-scheduler-functional-070600" in "kube-system" namespace has status "Ready":"True"
	I0229 18:01:04.060545    6380 pod_ready.go:81] duration metric: took 7.2506ms waiting for pod "kube-scheduler-functional-070600" in "kube-system" namespace to be "Ready" ...
	I0229 18:01:04.060545    6380 pod_ready.go:38] duration metric: took 14.0776955s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 18:01:04.060545    6380 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:01:04.072854    6380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:01:04.098118    6380 command_runner.go:130] > 6471
	I0229 18:01:04.098205    6380 api_server.go:72] duration metric: took 14.2547939s to wait for apiserver process to appear ...
	I0229 18:01:04.098205    6380 api_server.go:88] waiting for apiserver healthz status ...
	I0229 18:01:04.098205    6380 api_server.go:253] Checking apiserver healthz at https://172.26.52.106:8441/healthz ...
	I0229 18:01:04.105588    6380 api_server.go:279] https://172.26.52.106:8441/healthz returned 200:
	ok
	I0229 18:01:04.105848    6380 round_trippers.go:463] GET https://172.26.52.106:8441/version
	I0229 18:01:04.105930    6380 round_trippers.go:469] Request Headers:
	I0229 18:01:04.105930    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:01:04.105930    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:01:04.107707    6380 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 18:01:04.107707    6380 round_trippers.go:577] Response Headers:
	I0229 18:01:04.107707    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:01:04.107707    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:01:04.107707    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:01:04.107707    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:01:04.107707    6380 round_trippers.go:580]     Content-Length: 264
	I0229 18:01:04.107707    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:01:04 GMT
	I0229 18:01:04.107707    6380 round_trippers.go:580]     Audit-Id: e467d1a9-6f9e-433f-8226-f9acf8cbc1e2
	I0229 18:01:04.107707    6380 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0229 18:01:04.107707    6380 api_server.go:141] control plane version: v1.28.4
	I0229 18:01:04.107707    6380 api_server.go:131] duration metric: took 9.5017ms to wait for apiserver health ...
	I0229 18:01:04.107707    6380 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 18:01:04.107707    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/namespaces/kube-system/pods
	I0229 18:01:04.107707    6380 round_trippers.go:469] Request Headers:
	I0229 18:01:04.107707    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:01:04.107707    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:01:04.112391    6380 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 18:01:04.112391    6380 round_trippers.go:577] Response Headers:
	I0229 18:01:04.112391    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:01:04 GMT
	I0229 18:01:04.112391    6380 round_trippers.go:580]     Audit-Id: 5bf9dc57-5975-4a39-894e-7563c9212886
	I0229 18:01:04.112391    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:01:04.112391    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:01:04.112391    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:01:04.112391    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:01:04.114213    6380 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"602"},"items":[{"metadata":{"name":"coredns-5dd5756b68-rlkxp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c0537edd-1cdc-4d52-9c2a-743c59b3d0a1","resourceVersion":"526","creationTimestamp":"2024-02-29T17:58:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0cbc3bda-77fb-4b51-90c0-24cd4a31cc19","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:58:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0cbc3bda-77fb-4b51-90c0-24cd4a31cc19\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 48031 chars]
	I0229 18:01:04.116408    6380 system_pods.go:59] 7 kube-system pods found
	I0229 18:01:04.116408    6380 system_pods.go:61] "coredns-5dd5756b68-rlkxp" [c0537edd-1cdc-4d52-9c2a-743c59b3d0a1] Running
	I0229 18:01:04.116408    6380 system_pods.go:61] "etcd-functional-070600" [15ee6939-45d3-4680-9f03-ce44934af9d6] Running
	I0229 18:01:04.116408    6380 system_pods.go:61] "kube-apiserver-functional-070600" [65eef792-eafc-48a8-8865-f7f01371fa6e] Running
	I0229 18:01:04.116408    6380 system_pods.go:61] "kube-controller-manager-functional-070600" [36899a27-3284-4e92-9288-866d7a3c97ba] Running
	I0229 18:01:04.116408    6380 system_pods.go:61] "kube-proxy-wj6dl" [f2beec7d-0917-4c10-bbe6-303accd46692] Running
	I0229 18:01:04.116408    6380 system_pods.go:61] "kube-scheduler-functional-070600" [c6d69b9e-84ea-4827-bc4b-2a9387081024] Running
	I0229 18:01:04.116408    6380 system_pods.go:61] "storage-provisioner" [578ab2cc-0eab-4572-8d30-0cabd99bfa92] Running
	I0229 18:01:04.116408    6380 system_pods.go:74] duration metric: took 8.7007ms to wait for pod list to return data ...
	I0229 18:01:04.116513    6380 default_sa.go:34] waiting for default service account to be created ...
	I0229 18:01:04.116601    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/namespaces/default/serviceaccounts
	I0229 18:01:04.116601    6380 round_trippers.go:469] Request Headers:
	I0229 18:01:04.116601    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:01:04.116601    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:01:04.122391    6380 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 18:01:04.122391    6380 round_trippers.go:577] Response Headers:
	I0229 18:01:04.122391    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:01:04.122391    6380 round_trippers.go:580]     Content-Length: 261
	I0229 18:01:04.122391    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:01:04 GMT
	I0229 18:01:04.122391    6380 round_trippers.go:580]     Audit-Id: 8522353d-4bba-4ada-bd9c-489645aa886a
	I0229 18:01:04.122391    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:01:04.122391    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:01:04.123063    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:01:04.123063    6380 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"602"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"aa3aab3d-e0a1-4909-b233-01f23bb41f8a","resourceVersion":"340","creationTimestamp":"2024-02-29T17:58:33Z"}}]}
	I0229 18:01:04.124401    6380 default_sa.go:45] found service account: "default"
	I0229 18:01:04.124471    6380 default_sa.go:55] duration metric: took 7.9575ms for default service account to be created ...
	I0229 18:01:04.124471    6380 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 18:01:04.225875    6380 request.go:629] Waited for 101.095ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.52.106:8441/api/v1/namespaces/kube-system/pods
	I0229 18:01:04.225954    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/namespaces/kube-system/pods
	I0229 18:01:04.226028    6380 round_trippers.go:469] Request Headers:
	I0229 18:01:04.226028    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:01:04.226028    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:01:04.230560    6380 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 18:01:04.230560    6380 round_trippers.go:577] Response Headers:
	I0229 18:01:04.230560    6380 round_trippers.go:580]     Audit-Id: 5c665747-774f-47f7-b270-7ffb38540785
	I0229 18:01:04.230560    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:01:04.230560    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:01:04.230560    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:01:04.230560    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:01:04.230560    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:01:04 GMT
	I0229 18:01:04.232407    6380 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"602"},"items":[{"metadata":{"name":"coredns-5dd5756b68-rlkxp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c0537edd-1cdc-4d52-9c2a-743c59b3d0a1","resourceVersion":"526","creationTimestamp":"2024-02-29T17:58:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"0cbc3bda-77fb-4b51-90c0-24cd4a31cc19","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T17:58:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0cbc3bda-77fb-4b51-90c0-24cd4a31cc19\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 48031 chars]
	I0229 18:01:04.234541    6380 system_pods.go:86] 7 kube-system pods found
	I0229 18:01:04.234626    6380 system_pods.go:89] "coredns-5dd5756b68-rlkxp" [c0537edd-1cdc-4d52-9c2a-743c59b3d0a1] Running
	I0229 18:01:04.234626    6380 system_pods.go:89] "etcd-functional-070600" [15ee6939-45d3-4680-9f03-ce44934af9d6] Running
	I0229 18:01:04.234626    6380 system_pods.go:89] "kube-apiserver-functional-070600" [65eef792-eafc-48a8-8865-f7f01371fa6e] Running
	I0229 18:01:04.234626    6380 system_pods.go:89] "kube-controller-manager-functional-070600" [36899a27-3284-4e92-9288-866d7a3c97ba] Running
	I0229 18:01:04.234626    6380 system_pods.go:89] "kube-proxy-wj6dl" [f2beec7d-0917-4c10-bbe6-303accd46692] Running
	I0229 18:01:04.234626    6380 system_pods.go:89] "kube-scheduler-functional-070600" [c6d69b9e-84ea-4827-bc4b-2a9387081024] Running
	I0229 18:01:04.234626    6380 system_pods.go:89] "storage-provisioner" [578ab2cc-0eab-4572-8d30-0cabd99bfa92] Running
	I0229 18:01:04.234626    6380 system_pods.go:126] duration metric: took 110.1488ms to wait for k8s-apps to be running ...
	I0229 18:01:04.234711    6380 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 18:01:04.242584    6380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:01:04.270508    6380 system_svc.go:56] duration metric: took 35.7958ms WaitForService to wait for kubelet.
	I0229 18:01:04.270580    6380 kubeadm.go:581] duration metric: took 14.4271593s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 18:01:04.270580    6380 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:01:04.429567    6380 request.go:629] Waited for 158.8914ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.52.106:8441/api/v1/nodes
	I0229 18:01:04.429772    6380 round_trippers.go:463] GET https://172.26.52.106:8441/api/v1/nodes
	I0229 18:01:04.429772    6380 round_trippers.go:469] Request Headers:
	I0229 18:01:04.429772    6380 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:01:04.429772    6380 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:01:04.434175    6380 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 18:01:04.434256    6380 round_trippers.go:577] Response Headers:
	I0229 18:01:04.434256    6380 round_trippers.go:580]     Audit-Id: ec76e4be-e2ef-4301-a7b1-bc57cd2ac0b4
	I0229 18:01:04.434256    6380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:01:04.434256    6380 round_trippers.go:580]     Content-Type: application/json
	I0229 18:01:04.434256    6380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c056bed7-5b97-481d-aa11-3525d274c983
	I0229 18:01:04.434366    6380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: abc37225-32be-45f6-9ac9-ad5e4e3cff69
	I0229 18:01:04.434366    6380 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:01:04 GMT
	I0229 18:01:04.434588    6380 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"602"},"items":[{"metadata":{"name":"functional-070600","uid":"e0eab700-e4dd-497c-a050-39945e5cbad5","resourceVersion":"416","creationTimestamp":"2024-02-29T17:58:17Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-070600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"functional-070600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T17_58_20_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4839 chars]
	I0229 18:01:04.434588    6380 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:01:04.434588    6380 node_conditions.go:123] node cpu capacity is 2
	I0229 18:01:04.435117    6380 node_conditions.go:105] duration metric: took 163.9992ms to run NodePressure ...
	I0229 18:01:04.435117    6380 start.go:228] waiting for startup goroutines ...
	I0229 18:01:04.435117    6380 start.go:233] waiting for cluster config update ...
	I0229 18:01:04.435117    6380 start.go:242] writing updated cluster config ...
	I0229 18:01:04.443835    6380 ssh_runner.go:195] Run: rm -f paused
	I0229 18:01:04.566007    6380 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 18:01:04.566857    6380 out.go:177] * Done! kubectl is now configured to use "functional-070600" cluster and "default" namespace by default
	
	
	==> Docker <==
	Feb 29 18:00:44 functional-070600 dockerd[5695]: time="2024-02-29T18:00:44.527945458Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 18:00:44 functional-070600 dockerd[5695]: time="2024-02-29T18:00:44.527959259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 18:00:44 functional-070600 dockerd[5695]: time="2024-02-29T18:00:44.528161672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 18:00:44 functional-070600 dockerd[5695]: time="2024-02-29T18:00:44.563400113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 18:00:44 functional-070600 dockerd[5695]: time="2024-02-29T18:00:44.564529688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 18:00:44 functional-070600 dockerd[5695]: time="2024-02-29T18:00:44.564709500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 18:00:44 functional-070600 dockerd[5695]: time="2024-02-29T18:00:44.565035921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 18:00:44 functional-070600 dockerd[5695]: time="2024-02-29T18:00:44.623274889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 18:00:44 functional-070600 dockerd[5695]: time="2024-02-29T18:00:44.623434900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 18:00:44 functional-070600 dockerd[5695]: time="2024-02-29T18:00:44.623454501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 18:00:44 functional-070600 dockerd[5695]: time="2024-02-29T18:00:44.626530006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 18:00:44 functional-070600 dockerd[5695]: time="2024-02-29T18:00:44.653857121Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 18:00:44 functional-070600 dockerd[5695]: time="2024-02-29T18:00:44.653915224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 18:00:44 functional-070600 dockerd[5695]: time="2024-02-29T18:00:44.653927725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 18:00:44 functional-070600 dockerd[5695]: time="2024-02-29T18:00:44.654031932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 18:00:44 functional-070600 cri-dockerd[5904]: time="2024-02-29T18:00:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b35016540762f47dbd525544a82f514a7843d55f4c7a3ccf3d1d0a7587b26e6e/resolv.conf as [nameserver 172.26.48.1]"
	Feb 29 18:00:45 functional-070600 cri-dockerd[5904]: time="2024-02-29T18:00:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/98c7f78e900ebe32fa45aed6a74002c4f9d2793675f4c3e4ff964e813723ab94/resolv.conf as [nameserver 172.26.48.1]"
	Feb 29 18:00:45 functional-070600 dockerd[5695]: time="2024-02-29T18:00:45.146533066Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 18:00:45 functional-070600 dockerd[5695]: time="2024-02-29T18:00:45.147408214Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 18:00:45 functional-070600 dockerd[5695]: time="2024-02-29T18:00:45.155255941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 18:00:45 functional-070600 dockerd[5695]: time="2024-02-29T18:00:45.174744902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 18:00:45 functional-070600 dockerd[5695]: time="2024-02-29T18:00:45.266512596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 18:00:45 functional-070600 dockerd[5695]: time="2024-02-29T18:00:45.266602901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 18:00:45 functional-070600 dockerd[5695]: time="2024-02-29T18:00:45.266621602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 18:00:45 functional-070600 dockerd[5695]: time="2024-02-29T18:00:45.266782911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	fd1864fb4514a       6e38f40d628db       About a minute ago   Running             storage-provisioner       1                   98c7f78e900eb       storage-provisioner
	87b3a7adb1c2c       ead0a4a53df89       About a minute ago   Running             coredns                   1                   b35016540762f       coredns-5dd5756b68-rlkxp
	18867ea9eda50       83f6cc407eed8       About a minute ago   Running             kube-proxy                1                   02c992545b6fd       kube-proxy-wj6dl
	51076fed92b06       d058aa5ab969c       About a minute ago   Running             kube-controller-manager   1                   bb8ea16dc7b7e       kube-controller-manager-functional-070600
	65cfd946c7658       73deb9a3f7025       About a minute ago   Running             etcd                      1                   6937c7ae95061       etcd-functional-070600
	4bb2802f8c894       e3db313c6dbc0       About a minute ago   Running             kube-scheduler            1                   9470ed5807fa2       kube-scheduler-functional-070600
	f61a756015d9e       7fe0e6f37db33       About a minute ago   Running             kube-apiserver            1                   11db95c329a19       kube-apiserver-functional-070600
	dfbec380db530       6e38f40d628db       3 minutes ago        Exited              storage-provisioner       0                   5dd025d8d203b       storage-provisioner
	55e524bd0d058       83f6cc407eed8       4 minutes ago        Exited              kube-proxy                0                   b187661d6d611       kube-proxy-wj6dl
	0a050dbd789f4       ead0a4a53df89       4 minutes ago        Exited              coredns                   0                   9867d7d9c0830       coredns-5dd5756b68-rlkxp
	456b7c8892b6e       e3db313c6dbc0       4 minutes ago        Exited              kube-scheduler            0                   a82469228eea3       kube-scheduler-functional-070600
	d16496e64413d       7fe0e6f37db33       4 minutes ago        Exited              kube-apiserver            0                   18f9033253c7b       kube-apiserver-functional-070600
	9cd2a3e5d5661       d058aa5ab969c       4 minutes ago        Exited              kube-controller-manager   0                   f64876111f569       kube-controller-manager-functional-070600
	40dd74b3fa735       73deb9a3f7025       4 minutes ago        Exited              etcd                      0                   3c61d613b2300       etcd-functional-070600
	
	
	==> coredns [0a050dbd789f] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 09f0998677e0c19d72433bdbc19471218bfe4a8b92405418740861874d1549e73cec4df8f6750d3139464010abec770181315be2b4c8b222ced8b0f05062ec0c
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [87b3a7adb1c2] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 09f0998677e0c19d72433bdbc19471218bfe4a8b92405418740861874d1549e73cec4df8f6750d3139464010abec770181315be2b4c8b222ced8b0f05062ec0c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:53668 - 7195 "HINFO IN 787670727511017010.2080435216265913962. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.035546431s
	
	
	==> describe nodes <==
	Name:               functional-070600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-070600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19
	                    minikube.k8s.io/name=functional-070600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_29T17_58_20_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 17:58:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-070600
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 18:02:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 18:01:55 +0000   Thu, 29 Feb 2024 17:58:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 18:01:55 +0000   Thu, 29 Feb 2024 17:58:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 18:01:55 +0000   Thu, 29 Feb 2024 17:58:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 18:01:55 +0000   Thu, 29 Feb 2024 17:58:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.26.52.106
	  Hostname:    functional-070600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912876Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912876Ki
	  pods:               110
	System Info:
	  Machine ID:                 2fd1e8d943734e45a9035d8e869d831d
	  System UUID:                f4e1aa55-898f-5746-80a7-37edb79cfc20
	  Boot ID:                    dd0042b1-114c-4493-a935-c6e75f508bdc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-rlkxp                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m6s
	  kube-system                 etcd-functional-070600                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m18s
	  kube-system                 kube-apiserver-functional-070600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-controller-manager-functional-070600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-proxy-wj6dl                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-scheduler-functional-070600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m4s                   kube-proxy       
	  Normal  Starting                 111s                   kube-proxy       
	  Normal  Starting                 4m27s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    4m26s (x8 over 4m27s)  kubelet          Node functional-070600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m26s (x7 over 4m27s)  kubelet          Node functional-070600 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m26s (x8 over 4m27s)  kubelet          Node functional-070600 status is now: NodeHasSufficientMemory
	  Normal  Starting                 4m18s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m18s                  kubelet          Node functional-070600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m18s                  kubelet          Node functional-070600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m18s                  kubelet          Node functional-070600 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m14s                  kubelet          Node functional-070600 status is now: NodeReady
	  Normal  RegisteredNode           4m6s                   node-controller  Node functional-070600 event: Registered Node functional-070600 in Controller
	  Normal  RegisteredNode           99s                    node-controller  Node functional-070600 event: Registered Node functional-070600 in Controller
	
	
	==> dmesg <==
	[  +0.196082] systemd-fstab-generator[967]: Ignoring "noauto" option for root device
	[  +0.233767] systemd-fstab-generator[981]: Ignoring "noauto" option for root device
	[  +1.781070] systemd-fstab-generator[1138]: Ignoring "noauto" option for root device
	[  +0.211567] systemd-fstab-generator[1150]: Ignoring "noauto" option for root device
	[  +0.193716] systemd-fstab-generator[1162]: Ignoring "noauto" option for root device
	[  +0.255182] systemd-fstab-generator[1177]: Ignoring "noauto" option for root device
	[Feb29 17:58] systemd-fstab-generator[1283]: Ignoring "noauto" option for root device
	[  +0.103853] kauditd_printk_skb: 205 callbacks suppressed
	[ +10.580303] systemd-fstab-generator[1662]: Ignoring "noauto" option for root device
	[  +0.101018] kauditd_printk_skb: 51 callbacks suppressed
	[  +8.269313] systemd-fstab-generator[2614]: Ignoring "noauto" option for root device
	[  +0.139851] kauditd_printk_skb: 62 callbacks suppressed
	[ +13.987719] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.680281] kauditd_printk_skb: 39 callbacks suppressed
	[Feb29 18:00] systemd-fstab-generator[5219]: Ignoring "noauto" option for root device
	[  +0.628125] systemd-fstab-generator[5254]: Ignoring "noauto" option for root device
	[  +0.255604] systemd-fstab-generator[5266]: Ignoring "noauto" option for root device
	[  +0.302209] systemd-fstab-generator[5280]: Ignoring "noauto" option for root device
	[  +5.387060] kauditd_printk_skb: 89 callbacks suppressed
	[  +6.809309] systemd-fstab-generator[5854]: Ignoring "noauto" option for root device
	[  +0.213769] systemd-fstab-generator[5865]: Ignoring "noauto" option for root device
	[  +0.195250] systemd-fstab-generator[5877]: Ignoring "noauto" option for root device
	[  +0.291239] systemd-fstab-generator[5892]: Ignoring "noauto" option for root device
	[  +5.447298] kauditd_printk_skb: 118 callbacks suppressed
	[  +5.903143] kauditd_printk_skb: 67 callbacks suppressed
	
	
	==> etcd [40dd74b3fa73] <==
	{"level":"info","ts":"2024-02-29T17:58:14.895756Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f2ce23144f4c66b became candidate at term 2"}
	{"level":"info","ts":"2024-02-29T17:58:14.895849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f2ce23144f4c66b received MsgVoteResp from 6f2ce23144f4c66b at term 2"}
	{"level":"info","ts":"2024-02-29T17:58:14.895974Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f2ce23144f4c66b became leader at term 2"}
	{"level":"info","ts":"2024-02-29T17:58:14.896113Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6f2ce23144f4c66b elected leader 6f2ce23144f4c66b at term 2"}
	{"level":"info","ts":"2024-02-29T17:58:14.90032Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T17:58:14.904313Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"6f2ce23144f4c66b","local-member-attributes":"{Name:functional-070600 ClientURLs:[https://172.26.52.106:2379]}","request-path":"/0/members/6f2ce23144f4c66b/attributes","cluster-id":"81d7221589fbc915","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-29T17:58:14.904518Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T17:58:14.90627Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-29T17:58:14.909309Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T17:58:14.91059Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.26.52.106:2379"}
	{"level":"info","ts":"2024-02-29T17:58:14.915132Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-29T17:58:14.915302Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-29T17:58:14.923227Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"81d7221589fbc915","local-member-id":"6f2ce23144f4c66b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T17:58:14.923342Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T17:58:14.93822Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T18:00:25.881414Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-02-29T18:00:25.881461Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"functional-070600","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://172.26.52.106:2380"],"advertise-client-urls":["https://172.26.52.106:2379"]}
	{"level":"warn","ts":"2024-02-29T18:00:25.881559Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-29T18:00:25.881634Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-29T18:00:25.934613Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 172.26.52.106:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-29T18:00:25.934659Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 172.26.52.106:2379: use of closed network connection"}
	{"level":"info","ts":"2024-02-29T18:00:25.936129Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"6f2ce23144f4c66b","current-leader-member-id":"6f2ce23144f4c66b"}
	{"level":"info","ts":"2024-02-29T18:00:25.939242Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"172.26.52.106:2380"}
	{"level":"info","ts":"2024-02-29T18:00:25.93943Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"172.26.52.106:2380"}
	{"level":"info","ts":"2024-02-29T18:00:25.939446Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"functional-070600","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://172.26.52.106:2380"],"advertise-client-urls":["https://172.26.52.106:2379"]}
	
	
	==> etcd [65cfd946c765] <==
	{"level":"info","ts":"2024-02-29T18:00:44.976064Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-29T18:00:44.976372Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-29T18:00:44.975088Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f2ce23144f4c66b switched to configuration voters=(8011026538423436907)"}
	{"level":"info","ts":"2024-02-29T18:00:44.978704Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"81d7221589fbc915","local-member-id":"6f2ce23144f4c66b","added-peer-id":"6f2ce23144f4c66b","added-peer-peer-urls":["https://172.26.52.106:2380"]}
	{"level":"info","ts":"2024-02-29T18:00:44.978802Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"81d7221589fbc915","local-member-id":"6f2ce23144f4c66b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T18:00:44.978831Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T18:00:44.982936Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-02-29T18:00:44.983534Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"6f2ce23144f4c66b","initial-advertise-peer-urls":["https://172.26.52.106:2380"],"listen-peer-urls":["https://172.26.52.106:2380"],"advertise-client-urls":["https://172.26.52.106:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.26.52.106:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-29T18:00:44.983895Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-29T18:00:44.98432Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.26.52.106:2380"}
	{"level":"info","ts":"2024-02-29T18:00:44.987004Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.26.52.106:2380"}
	{"level":"info","ts":"2024-02-29T18:00:46.398548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f2ce23144f4c66b is starting a new election at term 2"}
	{"level":"info","ts":"2024-02-29T18:00:46.398824Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f2ce23144f4c66b became pre-candidate at term 2"}
	{"level":"info","ts":"2024-02-29T18:00:46.399085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f2ce23144f4c66b received MsgPreVoteResp from 6f2ce23144f4c66b at term 2"}
	{"level":"info","ts":"2024-02-29T18:00:46.399226Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f2ce23144f4c66b became candidate at term 3"}
	{"level":"info","ts":"2024-02-29T18:00:46.399322Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f2ce23144f4c66b received MsgVoteResp from 6f2ce23144f4c66b at term 3"}
	{"level":"info","ts":"2024-02-29T18:00:46.399443Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f2ce23144f4c66b became leader at term 3"}
	{"level":"info","ts":"2024-02-29T18:00:46.399574Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6f2ce23144f4c66b elected leader 6f2ce23144f4c66b at term 3"}
	{"level":"info","ts":"2024-02-29T18:00:46.403049Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"6f2ce23144f4c66b","local-member-attributes":"{Name:functional-070600 ClientURLs:[https://172.26.52.106:2379]}","request-path":"/0/members/6f2ce23144f4c66b/attributes","cluster-id":"81d7221589fbc915","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-29T18:00:46.403376Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T18:00:46.40471Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-29T18:00:46.405975Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T18:00:46.41088Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.26.52.106:2379"}
	{"level":"info","ts":"2024-02-29T18:00:46.422353Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-29T18:00:46.423278Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 18:02:40 up 6 min,  0 users,  load average: 0.41, 0.61, 0.31
	Linux functional-070600 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d16496e64413] <==
	W0229 18:00:34.742337       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:00:34.803855       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:00:34.873647       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:00:34.910796       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:00:34.916992       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:00:34.931964       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:00:34.956925       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:00:35.061929       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:00:35.068327       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:00:35.082859       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:00:35.083011       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:00:35.171985       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:00:35.290779       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:00:35.323699       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:00:35.344106       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:00:35.362575       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:00:35.385906       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:00:35.388400       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:00:35.452929       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:00:35.521205       1 logging.go:59] [core] [Channel #6 SubChannel #7] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:00:35.589066       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:00:35.717517       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:00:35.725313       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:00:35.736972       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 18:00:35.886823       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [f61a756015d9] <==
	I0229 18:00:48.329830       1 aggregator.go:164] waiting for initial CRD sync...
	I0229 18:00:48.330146       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0229 18:00:48.330223       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0229 18:00:48.330304       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0229 18:00:48.394538       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0229 18:00:48.394565       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0229 18:00:48.394573       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0229 18:00:48.396413       1 aggregator.go:166] initial CRD sync complete...
	I0229 18:00:48.396539       1 autoregister_controller.go:141] Starting autoregister controller
	I0229 18:00:48.396548       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0229 18:00:48.412595       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0229 18:00:48.413046       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0229 18:00:48.422804       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0229 18:00:48.422973       1 shared_informer.go:318] Caches are synced for configmaps
	I0229 18:00:48.471152       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0229 18:00:48.491851       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0229 18:00:48.505760       1 cache.go:39] Caches are synced for autoregister controller
	I0229 18:00:48.521619       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0229 18:00:48.521650       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0229 18:00:48.522251       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0229 18:00:48.529941       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0229 18:00:49.330662       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0229 18:00:49.746602       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.26.52.106]
	I0229 18:00:49.748298       1 controller.go:624] quota admission added evaluator for: endpoints
	I0229 18:00:49.758662       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [51076fed92b0] <==
	I0229 18:01:00.797728       1 shared_informer.go:318] Caches are synced for PVC protection
	I0229 18:01:00.801088       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0229 18:01:00.801449       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0229 18:01:00.801874       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="250.013µs"
	I0229 18:01:00.804383       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0229 18:01:00.806855       1 shared_informer.go:318] Caches are synced for cronjob
	I0229 18:01:00.808405       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0229 18:01:00.864138       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0229 18:01:00.867837       1 shared_informer.go:318] Caches are synced for resource quota
	I0229 18:01:00.875995       1 shared_informer.go:318] Caches are synced for resource quota
	I0229 18:01:00.884447       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0229 18:01:00.885827       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0229 18:01:00.887045       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0229 18:01:00.888591       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0229 18:01:00.948935       1 shared_informer.go:318] Caches are synced for daemon sets
	I0229 18:01:00.955618       1 shared_informer.go:318] Caches are synced for taint
	I0229 18:01:00.955884       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0229 18:01:00.956029       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-070600"
	I0229 18:01:00.956101       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0229 18:01:00.956217       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0229 18:01:00.956535       1 taint_manager.go:210] "Sending events to api server"
	I0229 18:01:00.957019       1 event.go:307] "Event occurred" object="functional-070600" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-070600 event: Registered Node functional-070600 in Controller"
	I0229 18:01:01.336774       1 shared_informer.go:318] Caches are synced for garbage collector
	I0229 18:01:01.343988       1 shared_informer.go:318] Caches are synced for garbage collector
	I0229 18:01:01.344012       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	
	==> kube-controller-manager [9cd2a3e5d566] <==
	I0229 17:58:33.247855       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-wj6dl"
	I0229 17:58:33.266274       1 shared_informer.go:318] Caches are synced for resource quota
	I0229 17:58:33.285866       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-rlkxp"
	I0229 17:58:33.311208       1 shared_informer.go:318] Caches are synced for attach detach
	I0229 17:58:33.317712       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-6h24q"
	I0229 17:58:33.349388       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="163.763074ms"
	I0229 17:58:33.404710       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="55.056547ms"
	I0229 17:58:33.405175       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="189.109µs"
	I0229 17:58:33.406427       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="144.107µs"
	I0229 17:58:33.429478       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="210.01µs"
	I0229 17:58:33.655587       1 shared_informer.go:318] Caches are synced for garbage collector
	I0229 17:58:33.655818       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0229 17:58:33.674733       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0229 17:58:33.695760       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-6h24q"
	I0229 17:58:33.729656       1 shared_informer.go:318] Caches are synced for garbage collector
	I0229 17:58:33.747801       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="73.81955ms"
	I0229 17:58:33.794478       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="46.453034ms"
	I0229 17:58:33.825106       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="30.467165ms"
	I0229 17:58:33.825509       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="99.705µs"
	I0229 17:58:35.541628       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="78.804µs"
	I0229 17:58:35.550815       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.303µs"
	I0229 17:58:35.555051       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="84.104µs"
	I0229 17:58:35.573045       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="66.304µs"
	I0229 17:59:14.557467       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.499896ms"
	I0229 17:59:14.557925       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="177.808µs"
	
	
	==> kube-proxy [18867ea9eda5] <==
	I0229 18:00:46.413081       1 server_others.go:69] "Using iptables proxy"
	I0229 18:00:48.461992       1 node.go:141] Successfully retrieved node IP: 172.26.52.106
	I0229 18:00:48.546882       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0229 18:00:48.547106       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0229 18:00:48.550386       1 server_others.go:152] "Using iptables Proxier"
	I0229 18:00:48.550761       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 18:00:48.551090       1 server.go:846] "Version info" version="v1.28.4"
	I0229 18:00:48.551455       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 18:00:48.553080       1 config.go:188] "Starting service config controller"
	I0229 18:00:48.553346       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 18:00:48.553581       1 config.go:97] "Starting endpoint slice config controller"
	I0229 18:00:48.553722       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 18:00:48.554360       1 config.go:315] "Starting node config controller"
	I0229 18:00:48.555994       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 18:00:48.654678       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0229 18:00:48.654977       1 shared_informer.go:318] Caches are synced for service config
	I0229 18:00:48.657374       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [55e524bd0d05] <==
	I0229 17:58:35.567621       1 server_others.go:69] "Using iptables proxy"
	I0229 17:58:35.588053       1 node.go:141] Successfully retrieved node IP: 172.26.52.106
	I0229 17:58:35.634870       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0229 17:58:35.634988       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0229 17:58:35.638893       1 server_others.go:152] "Using iptables Proxier"
	I0229 17:58:35.639017       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 17:58:35.639788       1 server.go:846] "Version info" version="v1.28.4"
	I0229 17:58:35.639817       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 17:58:35.640672       1 config.go:188] "Starting service config controller"
	I0229 17:58:35.640826       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 17:58:35.640889       1 config.go:97] "Starting endpoint slice config controller"
	I0229 17:58:35.641047       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 17:58:35.641909       1 config.go:315] "Starting node config controller"
	I0229 17:58:35.642109       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 17:58:35.741664       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0229 17:58:35.741715       1 shared_informer.go:318] Caches are synced for service config
	I0229 17:58:35.742320       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [456b7c8892b6] <==
	E0229 17:58:18.463401       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0229 17:58:18.577927       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0229 17:58:18.578281       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0229 17:58:18.616406       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0229 17:58:18.616926       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0229 17:58:18.640419       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0229 17:58:18.640451       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0229 17:58:18.706448       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0229 17:58:18.706824       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0229 17:58:18.709292       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0229 17:58:18.712644       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0229 17:58:18.721038       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0229 17:58:18.721283       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0229 17:58:18.753805       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0229 17:58:18.753842       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0229 17:58:18.781857       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0229 17:58:18.782013       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0229 17:58:18.806456       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0229 17:58:18.806511       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0229 17:58:18.806479       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0229 17:58:18.807125       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0229 17:58:20.748062       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0229 18:00:25.808273       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0229 18:00:25.808901       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0229 18:00:25.809363       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [4bb2802f8c89] <==
	W0229 18:00:48.436708       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0229 18:00:48.436737       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0229 18:00:48.436963       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0229 18:00:48.437019       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0229 18:00:48.437759       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0229 18:00:48.437907       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0229 18:00:48.438135       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0229 18:00:48.438287       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0229 18:00:48.442729       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0229 18:00:48.442793       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0229 18:00:48.443063       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0229 18:00:48.443938       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0229 18:00:48.443415       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0229 18:00:48.444161       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0229 18:00:48.443643       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0229 18:00:48.444405       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0229 18:00:48.443749       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0229 18:00:48.444768       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0229 18:00:48.443821       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0229 18:00:48.444907       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0229 18:00:48.443905       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0229 18:00:48.445038       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0229 18:00:48.445312       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0229 18:00:48.446162       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0229 18:00:50.002337       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 29 18:00:43 functional-070600 kubelet[2639]: I0229 18:00:43.598914    2639 status_manager.go:853] "Failed to get status for pod" podUID="c0537edd-1cdc-4d52-9c2a-743c59b3d0a1" pod="kube-system/coredns-5dd5756b68-rlkxp" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-rlkxp\": dial tcp 172.26.52.106:8441: connect: connection refused"
	Feb 29 18:00:43 functional-070600 kubelet[2639]: I0229 18:00:43.610099    2639 status_manager.go:853] "Failed to get status for pod" podUID="578ab2cc-0eab-4572-8d30-0cabd99bfa92" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 172.26.52.106:8441: connect: connection refused"
	Feb 29 18:00:44 functional-070600 kubelet[2639]: E0229 18:00:44.151921    2639 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-070600\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-070600?resourceVersion=0&timeout=10s\": dial tcp 172.26.52.106:8441: connect: connection refused"
	Feb 29 18:00:44 functional-070600 kubelet[2639]: E0229 18:00:44.152866    2639 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-070600\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-070600?timeout=10s\": dial tcp 172.26.52.106:8441: connect: connection refused"
	Feb 29 18:00:44 functional-070600 kubelet[2639]: E0229 18:00:44.153670    2639 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-070600\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-070600?timeout=10s\": dial tcp 172.26.52.106:8441: connect: connection refused"
	Feb 29 18:00:44 functional-070600 kubelet[2639]: E0229 18:00:44.154992    2639 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-070600\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-070600?timeout=10s\": dial tcp 172.26.52.106:8441: connect: connection refused"
	Feb 29 18:00:44 functional-070600 kubelet[2639]: E0229 18:00:44.156627    2639 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-070600\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-070600?timeout=10s\": dial tcp 172.26.52.106:8441: connect: connection refused"
	Feb 29 18:00:44 functional-070600 kubelet[2639]: E0229 18:00:44.156815    2639 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count"
	Feb 29 18:00:45 functional-070600 kubelet[2639]: I0229 18:00:45.033114    2639 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98c7f78e900ebe32fa45aed6a74002c4f9d2793675f4c3e4ff964e813723ab94"
	Feb 29 18:00:45 functional-070600 kubelet[2639]: I0229 18:00:45.102323    2639 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9470ed5807fa29854285c35aee0eeced179ebf50a646de93243aa90f2a13fcf3"
	Feb 29 18:00:45 functional-070600 kubelet[2639]: I0229 18:00:45.163699    2639 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb8ea16dc7b7e1e2a38b707ddf05cf2516ef54939421618f6b37eac6e4dc2bae"
	Feb 29 18:00:45 functional-070600 kubelet[2639]: I0229 18:00:45.186142    2639 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11db95c329a19130299e8585fa78bf4542f99d9e6496705a98a68e6af060bc8e"
	Feb 29 18:00:45 functional-070600 kubelet[2639]: I0229 18:00:45.231328    2639 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6937c7ae950610a8667f2d312e2ee0edb7790afaf11096c58425f0e14e430de4"
	Feb 29 18:00:45 functional-070600 kubelet[2639]: I0229 18:00:45.264374    2639 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02c992545b6fd7d2c8e2d07e87784927cc10623106fb2a35abe764e64a502fea"
	Feb 29 18:00:45 functional-070600 kubelet[2639]: I0229 18:00:45.413287    2639 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b35016540762f47dbd525544a82f514a7843d55f4c7a3ccf3d1d0a7587b26e6e"
	Feb 29 18:01:21 functional-070600 kubelet[2639]: E0229 18:01:21.288657    2639 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 18:01:21 functional-070600 kubelet[2639]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 18:01:21 functional-070600 kubelet[2639]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 18:01:21 functional-070600 kubelet[2639]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 18:01:21 functional-070600 kubelet[2639]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 18:02:21 functional-070600 kubelet[2639]: E0229 18:02:21.287988    2639 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 18:02:21 functional-070600 kubelet[2639]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 18:02:21 functional-070600 kubelet[2639]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 18:02:21 functional-070600 kubelet[2639]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 18:02:21 functional-070600 kubelet[2639]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [dfbec380db53] <==
	I0229 17:58:41.741135       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0229 17:58:41.762314       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0229 17:58:41.762393       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0229 17:58:41.774304       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0229 17:58:41.775191       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a88ab1a0-4ffd-4fc3-808b-53a2b516b76f", APIVersion:"v1", ResourceVersion:"418", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-070600_4f79a4e4-d352-436c-b6d5-f869b7a5ce85 became leader
	I0229 17:58:41.775700       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-070600_4f79a4e4-d352-436c-b6d5-f869b7a5ce85!
	I0229 17:58:41.876942       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-070600_4f79a4e4-d352-436c-b6d5-f869b7a5ce85!
	
	
	==> storage-provisioner [fd1864fb4514] <==
	I0229 18:00:46.713815       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0229 18:00:48.473360       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0229 18:00:48.474976       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0229 18:01:05.910453       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0229 18:01:05.910889       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a88ab1a0-4ffd-4fc3-808b-53a2b516b76f", APIVersion:"v1", ResourceVersion:"603", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-070600_8201d13e-1ba6-46c3-b571-44f97e6fa152 became leader
	I0229 18:01:05.911590       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-070600_8201d13e-1ba6-46c3-b571-44f97e6fa152!
	I0229 18:01:06.014073       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-070600_8201d13e-1ba6-46c3-b571-44f97e6fa152!
	

-- /stdout --
** stderr ** 
	W0229 18:02:32.606307   10564 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-070600 -n functional-070600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-070600 -n functional-070600: (11.0106724s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-070600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (30.61s)

TestFunctional/parallel/ConfigCmd (1.6s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd


=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-070600 config unset cpus" to be -""- but got *"W0229 18:05:22.721014    7632 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-070600 config get cpus: exit status 14 (264.949ms)

** stderr ** 
	W0229 18:05:23.044076    3188 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-070600 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0229 18:05:23.044076    3188 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 config set cpus 2
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-070600 config set cpus 2" to be -"! These changes will take effect upon a minikube delete and then a minikube start"- but got *"W0229 18:05:23.297003    2136 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n! These changes will take effect upon a minikube delete and then a minikube start"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 config get cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-070600 config get cpus" to be -""- but got *"W0229 18:05:23.552384    8680 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-070600 config unset cpus" to be -""- but got *"W0229 18:05:23.830115    9876 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-070600 config get cpus: exit status 14 (227.7598ms)

** stderr ** 
	W0229 18:05:24.070860    7152 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-070600 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0229 18:05:24.070860    7152 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
--- FAIL: TestFunctional/parallel/ConfigCmd (1.60s)

TestFunctional/parallel/ServiceCmd/HTTPS (15.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-070600 service --namespace=default --https --url hello-node: exit status 1 (15.0257711s)

** stderr ** 
	W0229 18:06:05.739663    6980 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1507: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-070600 service --namespace=default --https --url hello-node" : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (15.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (15.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-070600 service hello-node --url --format={{.IP}}: exit status 1 (15.0146706s)

                                                
                                                
** stderr ** 
	W0229 18:06:20.806157    9404 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-070600 service hello-node --url --format={{.IP}}": exit status 1
functional_test.go:1544: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (15.02s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (15.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-070600 service hello-node --url: exit status 1 (15.0235907s)

                                                
                                                
** stderr ** 
	W0229 18:06:35.832186    5076 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1557: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-070600 service hello-node --url": exit status 1
functional_test.go:1561: found endpoint for hello-node: 
functional_test.go:1569: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (15.03s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (395.42s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-windows-amd64.exe start -p ingress-addon-legacy-056900 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperv
E0229 18:18:07.038944    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
E0229 18:20:23.062671    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
E0229 18:20:31.759407    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
E0229 18:20:50.901359    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
E0229 18:23:34.949318    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p ingress-addon-legacy-056900 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperv: exit status 109 (6m35.0591055s)

                                                
                                                
-- stdout --
	* [ingress-addon-legacy-056900] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting control plane node ingress-addon-legacy-056900 in cluster ingress-addon-legacy-056900
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating hyperv VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	X Problems detected in kubelet:
	  Feb 29 18:23:58 ingress-addon-legacy-056900 kubelet[36725]: F0229 18:23:58.305689   36725 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	  Feb 29 18:23:59 ingress-addon-legacy-056900 kubelet[36916]: F0229 18:23:59.500842   36916 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	  Feb 29 18:24:00 ingress-addon-legacy-056900 kubelet[37109]: F0229 18:24:00.788452   37109 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 18:17:31.215115    3896 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0229 18:17:31.270743    3896 out.go:291] Setting OutFile to fd 1492 ...
	I0229 18:17:31.271165    3896 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:17:31.271165    3896 out.go:304] Setting ErrFile to fd 1448...
	I0229 18:17:31.271165    3896 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:17:31.287064    3896 out.go:298] Setting JSON to false
	I0229 18:17:31.290790    3896 start.go:129] hostinfo: {"hostname":"minikube5","uptime":52388,"bootTime":1709178262,"procs":196,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0229 18:17:31.290790    3896 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 18:17:31.293616    3896 out.go:177] * [ingress-addon-legacy-056900] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 18:17:31.294803    3896 notify.go:220] Checking for updates...
	I0229 18:17:31.295316    3896 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 18:17:31.295831    3896 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 18:17:31.296308    3896 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0229 18:17:31.297344    3896 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 18:17:31.297530    3896 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 18:17:31.298843    3896 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 18:17:36.180818    3896 out.go:177] * Using the hyperv driver based on user configuration
	I0229 18:17:36.181547    3896 start.go:299] selected driver: hyperv
	I0229 18:17:36.181636    3896 start.go:903] validating driver "hyperv" against <nil>
	I0229 18:17:36.181687    3896 start.go:914] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 18:17:36.229424    3896 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 18:17:36.230360    3896 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 18:17:36.230360    3896 cni.go:84] Creating CNI manager for ""
	I0229 18:17:36.230360    3896 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0229 18:17:36.230894    3896 start_flags.go:323] config:
	{Name:ingress-addon-legacy-056900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-056900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:
1m0s}
	I0229 18:17:36.231130    3896 iso.go:125] acquiring lock: {Name:mk91f2ee29fbed5605669750e8cfa308a1229357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:17:36.231445    3896 out.go:177] * Starting control plane node ingress-addon-legacy-056900 in cluster ingress-addon-legacy-056900
	I0229 18:17:36.232776    3896 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0229 18:17:36.280791    3896 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0229 18:17:36.281687    3896 cache.go:56] Caching tarball of preloaded images
	I0229 18:17:36.281823    3896 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0229 18:17:36.282532    3896 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0229 18:17:36.283174    3896 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0229 18:17:36.353643    3896 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0229 18:17:40.418152    3896 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0229 18:17:40.426265    3896 preload.go:256] verifying checksum of C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0229 18:17:41.508613    3896 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0229 18:17:41.517572    3896 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-056900\config.json ...
	I0229 18:17:41.517572    3896 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-056900\config.json: {Name:mk04fd7eb46c273fa15a4b06402145536d4909e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:17:41.518808    3896 start.go:365] acquiring machines lock for ingress-addon-legacy-056900: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 18:17:41.519904    3896 start.go:369] acquired machines lock for "ingress-addon-legacy-056900" in 0s
	I0229 18:17:41.520204    3896 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-056900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22
KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-056900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 18:17:41.520524    3896 start.go:125] createHost starting for "" (driver="hyperv")
	I0229 18:17:41.521805    3896 out.go:204] * Creating hyperv VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0229 18:17:41.522121    3896 start.go:159] libmachine.API.Create for "ingress-addon-legacy-056900" (driver="hyperv")
	I0229 18:17:41.522187    3896 client.go:168] LocalClient.Create starting
	I0229 18:17:41.522637    3896 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0229 18:17:41.522773    3896 main.go:141] libmachine: Decoding PEM data...
	I0229 18:17:41.522843    3896 main.go:141] libmachine: Parsing certificate...
	I0229 18:17:41.523033    3896 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0229 18:17:41.523280    3896 main.go:141] libmachine: Decoding PEM data...
	I0229 18:17:41.523343    3896 main.go:141] libmachine: Parsing certificate...
	I0229 18:17:41.523405    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0229 18:17:43.363773    3896 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0229 18:17:43.363773    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:17:43.371842    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0229 18:17:44.968968    3896 main.go:141] libmachine: [stdout =====>] : False
	
	I0229 18:17:44.968968    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:17:44.976132    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0229 18:17:46.309501    3896 main.go:141] libmachine: [stdout =====>] : True
	
	I0229 18:17:46.309501    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:17:46.309501    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0229 18:17:49.590611    3896 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0229 18:17:49.590611    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:17:49.593023    3896 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 18:17:49.926555    3896 main.go:141] libmachine: Creating SSH key...
	I0229 18:17:50.319618    3896 main.go:141] libmachine: Creating VM...
	I0229 18:17:50.319618    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0229 18:17:52.870995    3896 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0229 18:17:52.870995    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:17:52.870995    3896 main.go:141] libmachine: Using switch "Default Switch"
	I0229 18:17:52.870995    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0229 18:17:54.463354    3896 main.go:141] libmachine: [stdout =====>] : True
	
	I0229 18:17:54.463354    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:17:54.463354    3896 main.go:141] libmachine: Creating VHD
	I0229 18:17:54.470645    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ingress-addon-legacy-056900\fixed.vhd' -SizeBytes 10MB -Fixed
	I0229 18:17:57.979542    3896 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ingress-addon-legacy-05690
	                          0\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 82CD9403-9BAB-440E-96AC-5A69CD89E5AA
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0229 18:17:57.979542    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:17:57.979542    3896 main.go:141] libmachine: Writing magic tar header
	I0229 18:17:57.979542    3896 main.go:141] libmachine: Writing SSH key tar header
	I0229 18:17:57.998025    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ingress-addon-legacy-056900\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ingress-addon-legacy-056900\disk.vhd' -VHDType Dynamic -DeleteSource
	I0229 18:18:00.948764    3896 main.go:141] libmachine: [stdout =====>] : 
	I0229 18:18:00.957733    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:18:00.957733    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ingress-addon-legacy-056900\disk.vhd' -SizeBytes 20000MB
	I0229 18:18:03.271265    3896 main.go:141] libmachine: [stdout =====>] : 
	I0229 18:18:03.271265    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:18:03.277922    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ingress-addon-legacy-056900 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ingress-addon-legacy-056900' -SwitchName 'Default Switch' -MemoryStartupBytes 4096MB
	I0229 18:18:06.564920    3896 main.go:141] libmachine: [stdout =====>] : 
	Name                        State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                        ----- ----------- ----------------- ------   ------             -------
	ingress-addon-legacy-056900 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0229 18:18:06.564920    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:18:06.564920    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ingress-addon-legacy-056900 -DynamicMemoryEnabled $false
	I0229 18:18:08.631923    3896 main.go:141] libmachine: [stdout =====>] : 
	I0229 18:18:08.631923    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:18:08.631923    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ingress-addon-legacy-056900 -Count 2
	I0229 18:18:10.599235    3896 main.go:141] libmachine: [stdout =====>] : 
	I0229 18:18:10.599235    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:18:10.611035    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ingress-addon-legacy-056900 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ingress-addon-legacy-056900\boot2docker.iso'
	I0229 18:18:12.938464    3896 main.go:141] libmachine: [stdout =====>] : 
	I0229 18:18:12.948444    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:18:12.948444    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ingress-addon-legacy-056900 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ingress-addon-legacy-056900\disk.vhd'
	I0229 18:18:15.342844    3896 main.go:141] libmachine: [stdout =====>] : 
	I0229 18:18:15.342914    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:18:15.342914    3896 main.go:141] libmachine: Starting VM...
	I0229 18:18:15.342985    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ingress-addon-legacy-056900
	I0229 18:18:17.938155    3896 main.go:141] libmachine: [stdout =====>] : 
	I0229 18:18:17.938155    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:18:17.938155    3896 main.go:141] libmachine: Waiting for host to start...
	I0229 18:18:17.945698    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-056900 ).state
	I0229 18:18:19.993699    3896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:18:19.993699    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:18:19.994068    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-056900 ).networkadapters[0]).ipaddresses[0]
	I0229 18:18:22.288269    3896 main.go:141] libmachine: [stdout =====>] : 
	I0229 18:18:22.288269    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:18:23.297856    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-056900 ).state
	I0229 18:18:25.302274    3896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:18:25.302274    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:18:25.302274    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-056900 ).networkadapters[0]).ipaddresses[0]
	I0229 18:18:27.568161    3896 main.go:141] libmachine: [stdout =====>] : 
	I0229 18:18:27.571404    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:18:28.577375    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-056900 ).state
	I0229 18:18:30.561900    3896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:18:30.561900    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:18:30.561900    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-056900 ).networkadapters[0]).ipaddresses[0]
	I0229 18:18:32.804529    3896 main.go:141] libmachine: [stdout =====>] : 
	I0229 18:18:32.808410    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:18:33.815563    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-056900 ).state
	I0229 18:18:35.782948    3896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:18:35.782948    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:18:35.782948    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-056900 ).networkadapters[0]).ipaddresses[0]
	I0229 18:18:38.082285    3896 main.go:141] libmachine: [stdout =====>] : 
	I0229 18:18:38.082285    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:18:39.089970    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-056900 ).state
	I0229 18:18:41.003961    3896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:18:41.013159    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:18:41.013159    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-056900 ).networkadapters[0]).ipaddresses[0]
	I0229 18:18:43.317828    3896 main.go:141] libmachine: [stdout =====>] : 172.26.57.135
	
	I0229 18:18:43.317828    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:18:43.327162    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-056900 ).state
	I0229 18:18:45.253464    3896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:18:45.255057    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:18:45.255057    3896 machine.go:88] provisioning docker machine ...
	I0229 18:18:45.255057    3896 buildroot.go:166] provisioning hostname "ingress-addon-legacy-056900"
	I0229 18:18:45.255057    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-056900 ).state
	I0229 18:18:47.195750    3896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:18:47.195750    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:18:47.195750    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-056900 ).networkadapters[0]).ipaddresses[0]
	I0229 18:18:49.563500    3896 main.go:141] libmachine: [stdout =====>] : 172.26.57.135
	
	I0229 18:18:49.563500    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:18:49.579140    3896 main.go:141] libmachine: Using SSH client type: native
	I0229 18:18:49.589194    3896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.57.135 22 <nil> <nil>}
	I0229 18:18:49.589194    3896 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-056900 && echo "ingress-addon-legacy-056900" | sudo tee /etc/hostname
	I0229 18:18:49.734008    3896 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-056900
	
	I0229 18:18:49.734542    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-056900 ).state
	I0229 18:18:51.675409    3896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:18:51.684959    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:18:51.684959    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-056900 ).networkadapters[0]).ipaddresses[0]
	I0229 18:18:54.017784    3896 main.go:141] libmachine: [stdout =====>] : 172.26.57.135
	
	I0229 18:18:54.017784    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:18:54.032023    3896 main.go:141] libmachine: Using SSH client type: native
	I0229 18:18:54.032475    3896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.57.135 22 <nil> <nil>}
	I0229 18:18:54.032475    3896 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-056900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-056900/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-056900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:18:54.169810    3896 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:18:54.169841    3896 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0229 18:18:54.169934    3896 buildroot.go:174] setting up certificates
	I0229 18:18:54.169934    3896 provision.go:83] configureAuth start
	I0229 18:18:54.169934    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-056900 ).state
	I0229 18:18:56.068225    3896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:18:56.068225    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:18:56.068225    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-056900 ).networkadapters[0]).ipaddresses[0]
	I0229 18:18:58.372798    3896 main.go:141] libmachine: [stdout =====>] : 172.26.57.135
	
	I0229 18:18:58.372798    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:18:58.372798    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-056900 ).state
	I0229 18:19:00.291083    3896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:19:00.291083    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:19:00.300844    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-056900 ).networkadapters[0]).ipaddresses[0]
	I0229 18:19:02.599548    3896 main.go:141] libmachine: [stdout =====>] : 172.26.57.135
	
	I0229 18:19:02.599548    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:19:02.599548    3896 provision.go:138] copyHostCerts
	I0229 18:19:02.609580    3896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0229 18:19:02.609951    3896 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0229 18:19:02.609991    3896 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0229 18:19:02.610397    3896 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0229 18:19:02.612135    3896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0229 18:19:02.612352    3896 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0229 18:19:02.612352    3896 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0229 18:19:02.612352    3896 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0229 18:19:02.613417    3896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0229 18:19:02.613749    3896 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0229 18:19:02.613749    3896 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0229 18:19:02.613988    3896 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0229 18:19:02.614994    3896 provision.go:112] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ingress-addon-legacy-056900 san=[172.26.57.135 172.26.57.135 localhost 127.0.0.1 minikube ingress-addon-legacy-056900]
	I0229 18:19:02.702499    3896 provision.go:172] copyRemoteCerts
	I0229 18:19:02.712978    3896 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:19:02.712978    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-056900 ).state
	I0229 18:19:04.630643    3896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:19:04.640589    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:19:04.640680    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-056900 ).networkadapters[0]).ipaddresses[0]
	I0229 18:19:06.961080    3896 main.go:141] libmachine: [stdout =====>] : 172.26.57.135
	
	I0229 18:19:06.961080    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:19:06.971290    3896 sshutil.go:53] new ssh client: &{IP:172.26.57.135 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ingress-addon-legacy-056900\id_rsa Username:docker}
	I0229 18:19:07.076534    3896 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3631969s)
	I0229 18:19:07.076625    3896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0229 18:19:07.077095    3896 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 18:19:07.125099    3896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0229 18:19:07.126104    3896 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1253 bytes)
	I0229 18:19:07.181269    3896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0229 18:19:07.181637    3896 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 18:19:07.229274    3896 provision.go:86] duration metric: configureAuth took 13.058542s
	I0229 18:19:07.229274    3896 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:19:07.229791    3896 config.go:182] Loaded profile config "ingress-addon-legacy-056900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0229 18:19:07.229874    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-056900 ).state
	I0229 18:19:09.147523    3896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:19:09.157380    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:19:09.157380    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-056900 ).networkadapters[0]).ipaddresses[0]
	I0229 18:19:11.486274    3896 main.go:141] libmachine: [stdout =====>] : 172.26.57.135
	
	I0229 18:19:11.486274    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:19:11.500309    3896 main.go:141] libmachine: Using SSH client type: native
	I0229 18:19:11.500553    3896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.57.135 22 <nil> <nil>}
	I0229 18:19:11.500553    3896 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 18:19:11.627787    3896 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 18:19:11.627883    3896 buildroot.go:70] root file system type: tmpfs
	I0229 18:19:11.628042    3896 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 18:19:11.628042    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-056900 ).state
	I0229 18:19:13.573555    3896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:19:13.573555    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:19:13.583485    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-056900 ).networkadapters[0]).ipaddresses[0]
	I0229 18:19:15.882375    3896 main.go:141] libmachine: [stdout =====>] : 172.26.57.135
	
	I0229 18:19:15.882375    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:19:15.896609    3896 main.go:141] libmachine: Using SSH client type: native
	I0229 18:19:15.897104    3896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.57.135 22 <nil> <nil>}
	I0229 18:19:15.897257    3896 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 18:19:16.051174    3896 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 18:19:16.051287    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-056900 ).state
	I0229 18:19:17.966855    3896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:19:17.976662    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:19:17.976662    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-056900 ).networkadapters[0]).ipaddresses[0]
	I0229 18:19:20.276247    3896 main.go:141] libmachine: [stdout =====>] : 172.26.57.135
	
	I0229 18:19:20.276247    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:19:20.290521    3896 main.go:141] libmachine: Using SSH client type: native
	I0229 18:19:20.290941    3896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.57.135 22 <nil> <nil>}
	I0229 18:19:20.290941    3896 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 18:19:21.299372    3896 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0229 18:19:21.299411    3896 machine.go:91] provisioned docker machine in 36.0423569s
	I0229 18:19:21.299482    3896 client.go:171] LocalClient.Create took 1m39.7716958s
	I0229 18:19:21.299522    3896 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-056900" took 1m39.771833s
	I0229 18:19:21.299522    3896 start.go:300] post-start starting for "ingress-addon-legacy-056900" (driver="hyperv")
	I0229 18:19:21.299522    3896 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:19:21.309785    3896 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:19:21.309785    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-056900 ).state
	I0229 18:19:23.196882    3896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:19:23.196882    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:19:23.206248    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-056900 ).networkadapters[0]).ipaddresses[0]
	I0229 18:19:25.512903    3896 main.go:141] libmachine: [stdout =====>] : 172.26.57.135
	
	I0229 18:19:25.512903    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:19:25.523246    3896 sshutil.go:53] new ssh client: &{IP:172.26.57.135 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ingress-addon-legacy-056900\id_rsa Username:docker}
	I0229 18:19:25.628002    3896 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3179781s)
	I0229 18:19:25.637630    3896 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:19:25.645393    3896 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:19:25.645393    3896 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0229 18:19:25.646062    3896 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0229 18:19:25.646728    3896 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem -> 43562.pem in /etc/ssl/certs
	I0229 18:19:25.646795    3896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem -> /etc/ssl/certs/43562.pem
	I0229 18:19:25.658933    3896 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:19:25.676870    3896 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem --> /etc/ssl/certs/43562.pem (1708 bytes)
	I0229 18:19:25.722027    3896 start.go:303] post-start completed in 4.4222604s
	I0229 18:19:25.726307    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-056900 ).state
	I0229 18:19:27.670912    3896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:19:27.670912    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:19:27.679368    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-056900 ).networkadapters[0]).ipaddresses[0]
	I0229 18:19:30.040929    3896 main.go:141] libmachine: [stdout =====>] : 172.26.57.135
	
	I0229 18:19:30.040929    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:19:30.050546    3896 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-056900\config.json ...
	I0229 18:19:30.052528    3896 start.go:128] duration metric: createHost completed in 1m48.5259357s
	I0229 18:19:30.053126    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-056900 ).state
	I0229 18:19:31.972708    3896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:19:31.982277    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:19:31.982497    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-056900 ).networkadapters[0]).ipaddresses[0]
	I0229 18:19:34.302695    3896 main.go:141] libmachine: [stdout =====>] : 172.26.57.135
	
	I0229 18:19:34.312490    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:19:34.316535    3896 main.go:141] libmachine: Using SSH client type: native
	I0229 18:19:34.316765    3896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.57.135 22 <nil> <nil>}
	I0229 18:19:34.316765    3896 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 18:19:34.440259    3896 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709230774.608687513
	
	I0229 18:19:34.440259    3896 fix.go:206] guest clock: 1709230774.608687513
	I0229 18:19:34.440259    3896 fix.go:219] Guest: 2024-02-29 18:19:34.608687513 +0000 UTC Remote: 2024-02-29 18:19:30.0530493 +0000 UTC m=+118.946545301 (delta=4.555638213s)
	I0229 18:19:34.440259    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-056900 ).state
	I0229 18:19:36.384677    3896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:19:36.384677    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:19:36.384763    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-056900 ).networkadapters[0]).ipaddresses[0]
	I0229 18:19:38.693013    3896 main.go:141] libmachine: [stdout =====>] : 172.26.57.135
	
	I0229 18:19:38.693079    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:19:38.696671    3896 main.go:141] libmachine: Using SSH client type: native
	I0229 18:19:38.696671    3896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.57.135 22 <nil> <nil>}
	I0229 18:19:38.696671    3896 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709230774
	I0229 18:19:38.831120    3896 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Feb 29 18:19:34 UTC 2024
	
	I0229 18:19:38.831735    3896 fix.go:226] clock set: Thu Feb 29 18:19:34 UTC 2024
	 (err=<nil>)
	I0229 18:19:38.831735    3896 start.go:83] releasing machines lock for "ingress-addon-legacy-056900", held for 1m57.3052645s
	I0229 18:19:38.832078    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-056900 ).state
	I0229 18:19:40.773596    3896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:19:40.773674    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:19:40.773746    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-056900 ).networkadapters[0]).ipaddresses[0]
	I0229 18:19:43.104964    3896 main.go:141] libmachine: [stdout =====>] : 172.26.57.135
	
	I0229 18:19:43.104964    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:19:43.119422    3896 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:19:43.119607    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-056900 ).state
	I0229 18:19:43.130023    3896 ssh_runner.go:195] Run: cat /version.json
	I0229 18:19:43.130023    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-056900 ).state
	I0229 18:19:45.049766    3896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:19:45.049766    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:19:45.049766    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-056900 ).networkadapters[0]).ipaddresses[0]
	I0229 18:19:45.097048    3896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:19:45.097048    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:19:45.097048    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-056900 ).networkadapters[0]).ipaddresses[0]
	I0229 18:19:47.384320    3896 main.go:141] libmachine: [stdout =====>] : 172.26.57.135
	
	I0229 18:19:47.384320    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:19:47.384320    3896 sshutil.go:53] new ssh client: &{IP:172.26.57.135 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ingress-addon-legacy-056900\id_rsa Username:docker}
	I0229 18:19:47.438643    3896 main.go:141] libmachine: [stdout =====>] : 172.26.57.135
	
	I0229 18:19:47.438740    3896 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:19:47.438804    3896 sshutil.go:53] new ssh client: &{IP:172.26.57.135 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ingress-addon-legacy-056900\id_rsa Username:docker}
	I0229 18:19:47.560417    3896 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.4407494s)
	I0229 18:19:47.560417    3896 ssh_runner.go:235] Completed: cat /version.json: (4.4301492s)
	I0229 18:19:47.570766    3896 ssh_runner.go:195] Run: systemctl --version
	I0229 18:19:47.589067    3896 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 18:19:47.597929    3896 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:19:47.605799    3896 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0229 18:19:47.633091    3896 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0229 18:19:47.662280    3896 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 18:19:47.662280    3896 start.go:475] detecting cgroup driver to use...
	I0229 18:19:47.662648    3896 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:19:47.705011    3896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0229 18:19:47.739793    3896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 18:19:47.758569    3896 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 18:19:47.766608    3896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 18:19:47.799072    3896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:19:47.827699    3896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 18:19:47.860757    3896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:19:47.889423    3896 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:19:47.922550    3896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 18:19:47.951007    3896 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:19:47.978744    3896 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:19:48.007059    3896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:19:48.195917    3896 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 18:19:48.233793    3896 start.go:475] detecting cgroup driver to use...
	I0229 18:19:48.245256    3896 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 18:19:48.285966    3896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:19:48.315525    3896 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 18:19:48.357319    3896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:19:48.391196    3896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 18:19:48.424852    3896 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 18:19:48.478636    3896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 18:19:48.503002    3896 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:19:48.547067    3896 ssh_runner.go:195] Run: which cri-dockerd
	I0229 18:19:48.563198    3896 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 18:19:48.579897    3896 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 18:19:48.625003    3896 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 18:19:48.808948    3896 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 18:19:48.985744    3896 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 18:19:48.986246    3896 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 18:19:49.030831    3896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:19:49.222791    3896 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 18:19:50.729728    3896 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5067866s)
	I0229 18:19:50.736891    3896 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 18:19:50.779789    3896 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 18:19:50.813217    3896 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
	I0229 18:19:50.814337    3896 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0229 18:19:50.823214    3896 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0229 18:19:50.823214    3896 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0229 18:19:50.823214    3896 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0229 18:19:50.823214    3896 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:76:3f:19 Flags:up|broadcast|multicast|running}
	I0229 18:19:50.826766    3896 ip.go:210] interface addr: fe80::841a:4367:8c9:abc/64
	I0229 18:19:50.826828    3896 ip.go:210] interface addr: 172.26.48.1/20
	I0229 18:19:50.836369    3896 ssh_runner.go:195] Run: grep 172.26.48.1	host.minikube.internal$ /etc/hosts
	I0229 18:19:50.838047    3896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.26.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:19:50.870014    3896 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0229 18:19:50.877010    3896 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 18:19:50.902092    3896 docker.go:685] Got preloaded images: 
	I0229 18:19:50.902186    3896 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0229 18:19:50.914548    3896 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 18:19:50.949851    3896 ssh_runner.go:195] Run: which lz4
	I0229 18:19:50.957481    3896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0229 18:19:50.966583    3896 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 18:19:50.975135    3896 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 18:19:50.975278    3896 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (424164442 bytes)
	I0229 18:19:52.753473    3896 docker.go:649] Took 1.795548 seconds to copy over tarball
	I0229 18:19:52.762858    3896 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 18:19:59.476756    3896 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (6.7135255s)
	I0229 18:19:59.476914    3896 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 18:19:59.545591    3896 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 18:19:59.564803    3896 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0229 18:19:59.610000    3896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:19:59.803332    3896 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 18:20:05.268732    3896 ssh_runner.go:235] Completed: sudo systemctl restart docker: (5.4650333s)
	I0229 18:20:05.275727    3896 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 18:20:05.302410    3896 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0229 18:20:05.302410    3896 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0229 18:20:05.302545    3896 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 18:20:05.319541    3896 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:20:05.334622    3896 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0229 18:20:05.340792    3896 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0229 18:20:05.340884    3896 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 18:20:05.341296    3896 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:20:05.342701    3896 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0229 18:20:05.349884    3896 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0229 18:20:05.349884    3896 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0229 18:20:05.350925    3896 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0229 18:20:05.359391    3896 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0229 18:20:05.360474    3896 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0229 18:20:05.362209    3896 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 18:20:05.362445    3896 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0229 18:20:05.367467    3896 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0229 18:20:05.368400    3896 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0229 18:20:05.385800    3896 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	W0229 18:20:05.445921    3896 image.go:187] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 18:20:05.517559    3896 image.go:187] authn lookup for registry.k8s.io/kube-proxy:v1.18.20 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 18:20:05.600126    3896 image.go:187] authn lookup for registry.k8s.io/kube-scheduler:v1.18.20 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 18:20:05.680879    3896 image.go:187] authn lookup for registry.k8s.io/kube-controller-manager:v1.18.20 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0229 18:20:05.748888    3896 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	W0229 18:20:05.757851    3896 image.go:187] authn lookup for registry.k8s.io/coredns:1.6.7 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 18:20:05.833889    3896 image.go:187] authn lookup for registry.k8s.io/kube-apiserver:v1.18.20 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 18:20:05.913734    3896 image.go:187] authn lookup for registry.k8s.io/pause:3.2 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0229 18:20:05.919448    3896 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0229 18:20:05.930017    3896 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0229 18:20:05.950494    3896 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0229 18:20:05.950589    3896 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.18.20 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.18.20
	I0229 18:20:05.950656    3896 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0229 18:20:05.958207    3896 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I0229 18:20:05.968062    3896 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0229 18:20:05.968097    3896 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.18.20 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.18.20
	I0229 18:20:05.968137    3896 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0229 18:20:05.975243    3896 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0229 18:20:05.987662    3896 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	W0229 18:20:05.995619    3896 image.go:187] authn lookup for registry.k8s.io/etcd:3.4.3-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0229 18:20:06.001549    3896 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.18.20
	I0229 18:20:06.014265    3896 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.18.20
	I0229 18:20:06.025541    3896 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0229 18:20:06.025627    3896 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.18.20 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.18.20
	I0229 18:20:06.025684    3896 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 18:20:06.033662    3896 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0229 18:20:06.059527    3896 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.18.20
	I0229 18:20:06.060654    3896 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0229 18:20:06.070167    3896 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0229 18:20:06.087939    3896 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0229 18:20:06.090966    3896 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0229 18:20:06.090966    3896 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns:1.6.7 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.7
	I0229 18:20:06.090966    3896 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.7
	I0229 18:20:06.101098    3896 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0229 18:20:06.105236    3896 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0229 18:20:06.105236    3896 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.18.20 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.18.20
	I0229 18:20:06.105236    3896 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0229 18:20:06.112451    3896 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0229 18:20:06.127499    3896 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0229 18:20:06.127548    3896 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.2 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.2
	I0229 18:20:06.127617    3896 docker.go:337] Removing image: registry.k8s.io/pause:3.2
	I0229 18:20:06.134494    3896 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0229 18:20:06.137011    3896 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.7
	I0229 18:20:06.148954    3896 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.18.20
	I0229 18:20:06.165502    3896 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.2
	I0229 18:20:06.224227    3896 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0229 18:20:06.254426    3896 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0229 18:20:06.254426    3896 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.4.3-0 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0
	I0229 18:20:06.254426    3896 docker.go:337] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0229 18:20:06.263818    3896 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0229 18:20:06.297896    3896 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0
	I0229 18:20:06.298309    3896 cache_images.go:92] LoadImages completed in 995.7087ms
	W0229 18:20:06.298482    3896 out.go:239] X Unable to load cached images: loading cached images: CreateFile C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.18.20: The system cannot find the path specified.
	X Unable to load cached images: loading cached images: CreateFile C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.18.20: The system cannot find the path specified.
	I0229 18:20:06.305667    3896 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 18:20:06.346589    3896 cni.go:84] Creating CNI manager for ""
	I0229 18:20:06.346761    3896 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0229 18:20:06.346957    3896 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:20:06.346957    3896 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.26.57.135 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-056900 NodeName:ingress-addon-legacy-056900 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.26.57.135"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.26.57.135 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 18:20:06.347143    3896 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.26.57.135
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-056900"
	  kubeletExtraArgs:
	    node-ip: 172.26.57.135
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.26.57.135"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:20:06.347386    3896 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-056900 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.26.57.135
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-056900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 18:20:06.356905    3896 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0229 18:20:06.375178    3896 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:20:06.385341    3896 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:20:06.403128    3896 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (355 bytes)
	I0229 18:20:06.433559    3896 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0229 18:20:06.463767    3896 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2127 bytes)
	I0229 18:20:06.503465    3896 ssh_runner.go:195] Run: grep 172.26.57.135	control-plane.minikube.internal$ /etc/hosts
	I0229 18:20:06.510117    3896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.26.57.135	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:20:06.532612    3896 certs.go:56] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-056900 for IP: 172.26.57.135
	I0229 18:20:06.532612    3896 certs.go:190] acquiring lock for shared ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:20:06.533549    3896 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0229 18:20:06.534144    3896 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0229 18:20:06.534357    3896 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-056900\client.key
	I0229 18:20:06.534889    3896 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-056900\client.crt with IP's: []
	I0229 18:20:06.812069    3896 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-056900\client.crt ...
	I0229 18:20:06.812069    3896 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-056900\client.crt: {Name:mk0303ad7bb29bc925d998473e09c36ced819737 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:20:06.816603    3896 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-056900\client.key ...
	I0229 18:20:06.816603    3896 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-056900\client.key: {Name:mk99839ad302826e4362f162a53369faac119fc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:20:06.818353    3896 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-056900\apiserver.key.6eedf7ad
	I0229 18:20:06.818353    3896 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-056900\apiserver.crt.6eedf7ad with IP's: [172.26.57.135 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 18:20:07.301429    3896 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-056900\apiserver.crt.6eedf7ad ...
	I0229 18:20:07.301429    3896 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-056900\apiserver.crt.6eedf7ad: {Name:mk9cb9a9fa6680cc7f5441437d859a6bc661d84c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:20:07.306078    3896 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-056900\apiserver.key.6eedf7ad ...
	I0229 18:20:07.306078    3896 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-056900\apiserver.key.6eedf7ad: {Name:mke7d3f30e4bbebf5212af5e3395f0275a4decb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:20:07.307053    3896 certs.go:337] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-056900\apiserver.crt.6eedf7ad -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-056900\apiserver.crt
	I0229 18:20:07.311681    3896 certs.go:341] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-056900\apiserver.key.6eedf7ad -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-056900\apiserver.key
	I0229 18:20:07.318303    3896 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-056900\proxy-client.key
	I0229 18:20:07.318303    3896 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-056900\proxy-client.crt with IP's: []
	I0229 18:20:07.570908    3896 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-056900\proxy-client.crt ...
	I0229 18:20:07.581025    3896 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-056900\proxy-client.crt: {Name:mk06b63c5fd98add5c44e4aa291495661a4e580b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:20:07.581352    3896 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-056900\proxy-client.key ...
	I0229 18:20:07.581352    3896 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-056900\proxy-client.key: {Name:mkc82bf55ebe503a52e092f1db8ba11698c5a52f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:20:07.581352    3896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-056900\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0229 18:20:07.581352    3896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-056900\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0229 18:20:07.581352    3896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-056900\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0229 18:20:07.592424    3896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-056900\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0229 18:20:07.593214    3896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0229 18:20:07.593214    3896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0229 18:20:07.595557    3896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0229 18:20:07.595557    3896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0229 18:20:07.595769    3896 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4356.pem (1338 bytes)
	W0229 18:20:07.595769    3896 certs.go:433] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4356_empty.pem, impossibly tiny 0 bytes
	I0229 18:20:07.596419    3896 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0229 18:20:07.596567    3896 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0229 18:20:07.596567    3896 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0229 18:20:07.596567    3896 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0229 18:20:07.597202    3896 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem (1708 bytes)
	I0229 18:20:07.597535    3896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4356.pem -> /usr/share/ca-certificates/4356.pem
	I0229 18:20:07.597535    3896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem -> /usr/share/ca-certificates/43562.pem
	I0229 18:20:07.597535    3896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:20:07.604593    3896 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-056900\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:20:07.650825    3896 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-056900\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 18:20:07.700328    3896 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-056900\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:20:07.745364    3896 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ingress-addon-legacy-056900\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 18:20:07.792994    3896 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:20:07.838859    3896 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 18:20:07.889697    3896 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:20:07.939159    3896 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 18:20:07.980269    3896 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4356.pem --> /usr/share/ca-certificates/4356.pem (1338 bytes)
	I0229 18:20:08.025307    3896 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem --> /usr/share/ca-certificates/43562.pem (1708 bytes)
	I0229 18:20:08.081852    3896 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:20:08.126587    3896 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:20:08.166951    3896 ssh_runner.go:195] Run: openssl version
	I0229 18:20:08.188971    3896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43562.pem && ln -fs /usr/share/ca-certificates/43562.pem /etc/ssl/certs/43562.pem"
	I0229 18:20:08.222525    3896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43562.pem
	I0229 18:20:08.229625    3896 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:55 /usr/share/ca-certificates/43562.pem
	I0229 18:20:08.238614    3896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43562.pem
	I0229 18:20:08.259887    3896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43562.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 18:20:08.290132    3896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:20:08.319750    3896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:20:08.327755    3896 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:41 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:20:08.339092    3896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:20:08.358263    3896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:20:08.385944    3896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4356.pem && ln -fs /usr/share/ca-certificates/4356.pem /etc/ssl/certs/4356.pem"
	I0229 18:20:08.415216    3896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4356.pem
	I0229 18:20:08.422368    3896 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:55 /usr/share/ca-certificates/4356.pem
	I0229 18:20:08.431747    3896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4356.pem
	I0229 18:20:08.451844    3896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4356.pem /etc/ssl/certs/51391683.0"
	I0229 18:20:08.482236    3896 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:20:08.490670    3896 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 18:20:08.491012    3896 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-056900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-056900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.26.57.135 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:20:08.500257    3896 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 18:20:08.537027    3896 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:20:08.565867    3896 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:20:08.594071    3896 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:20:08.615369    3896 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:20:08.615369    3896 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 18:20:08.696766    3896 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0229 18:20:08.701444    3896 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 18:20:08.972244    3896 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 18:20:08.972244    3896 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 18:20:08.972764    3896 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 18:20:09.183014    3896 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 18:20:09.185653    3896 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 18:20:09.185864    3896 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 18:20:09.371611    3896 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 18:20:09.372743    3896 out.go:204]   - Generating certificates and keys ...
	I0229 18:20:09.374104    3896 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 18:20:09.374104    3896 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 18:20:09.927251    3896 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 18:20:10.094048    3896 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 18:20:10.351754    3896 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 18:20:10.593644    3896 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 18:20:10.679777    3896 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 18:20:10.694496    3896 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-056900 localhost] and IPs [172.26.57.135 127.0.0.1 ::1]
	I0229 18:20:10.889371    3896 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 18:20:10.893257    3896 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-056900 localhost] and IPs [172.26.57.135 127.0.0.1 ::1]
	I0229 18:20:11.172415    3896 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 18:20:11.520378    3896 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 18:20:12.317191    3896 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 18:20:12.317780    3896 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 18:20:12.493444    3896 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 18:20:12.678052    3896 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 18:20:12.910603    3896 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 18:20:13.140600    3896 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 18:20:13.145767    3896 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 18:20:13.146453    3896 out.go:204]   - Booting up control plane ...
	I0229 18:20:13.146780    3896 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 18:20:13.159490    3896 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 18:20:13.162407    3896 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 18:20:13.164890    3896 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 18:20:13.168296    3896 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 18:20:53.170003    3896 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 18:20:53.170003    3896 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:20:53.170360    3896 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:20:58.166779    3896 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:20:58.172465    3896 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:21:08.161070    3896 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:21:08.172794    3896 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:21:28.173301    3896 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:21:28.173910    3896 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:22:08.174278    3896 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:22:08.175142    3896 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:22:08.175225    3896 kubeadm.go:322] 
	I0229 18:22:08.175386    3896 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0229 18:22:08.175541    3896 kubeadm.go:322] 		timed out waiting for the condition
	I0229 18:22:08.175611    3896 kubeadm.go:322] 
	I0229 18:22:08.175611    3896 kubeadm.go:322] 	This error is likely caused by:
	I0229 18:22:08.175611    3896 kubeadm.go:322] 		- The kubelet is not running
	I0229 18:22:08.175611    3896 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 18:22:08.175611    3896 kubeadm.go:322] 
	I0229 18:22:08.176255    3896 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 18:22:08.176362    3896 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0229 18:22:08.176470    3896 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0229 18:22:08.176470    3896 kubeadm.go:322] 
	I0229 18:22:08.176597    3896 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 18:22:08.176983    3896 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0229 18:22:08.177085    3896 kubeadm.go:322] 
	I0229 18:22:08.177271    3896 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0229 18:22:08.177390    3896 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0229 18:22:08.177491    3896 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0229 18:22:08.177736    3896 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0229 18:22:08.177736    3896 kubeadm.go:322] 
	I0229 18:22:08.178106    3896 kubeadm.go:322] W0229 18:20:08.866955    1591 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0229 18:22:08.178870    3896 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 18:22:08.179150    3896 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
	I0229 18:22:08.179559    3896 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 18:22:08.179994    3896 kubeadm.go:322] W0229 18:20:13.332466    1591 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0229 18:22:08.180444    3896 kubeadm.go:322] W0229 18:20:13.334398    1591 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0229 18:22:08.180444    3896 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 18:22:08.180444    3896 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0229 18:22:08.181119    3896 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-056900 localhost] and IPs [172.26.57.135 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-056900 localhost] and IPs [172.26.57.135 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0229 18:20:08.866955    1591 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 18:20:13.332466    1591 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 18:20:13.334398    1591 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-056900 localhost] and IPs [172.26.57.135 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-056900 localhost] and IPs [172.26.57.135 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0229 18:20:08.866955    1591 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 18:20:13.332466    1591 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 18:20:13.334398    1591 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0229 18:22:08.181254    3896 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0229 18:22:08.749223    3896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:22:08.780907    3896 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:22:08.795821    3896 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:22:08.798362    3896 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 18:22:08.871487    3896 kubeadm.go:322] W0229 18:22:09.040929   19884 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0229 18:22:08.976444    3896 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 18:22:09.014269    3896 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
	I0229 18:22:09.118831    3896 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 18:22:10.379516    3896 kubeadm.go:322] W0229 18:22:10.548671   19884 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0229 18:22:10.383512    3896 kubeadm.go:322] W0229 18:22:10.552404   19884 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0229 18:24:05.399342    3896 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 18:24:05.399609    3896 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 18:24:05.400898    3896 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0229 18:24:05.401216    3896 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 18:24:05.401363    3896 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 18:24:05.401363    3896 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 18:24:05.402192    3896 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 18:24:05.402618    3896 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 18:24:05.402825    3896 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 18:24:05.403016    3896 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 18:24:05.403568    3896 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 18:24:05.404786    3896 out.go:204]   - Generating certificates and keys ...
	I0229 18:24:05.405156    3896 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 18:24:05.405258    3896 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 18:24:05.405622    3896 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 18:24:05.405715    3896 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 18:24:05.405715    3896 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 18:24:05.405715    3896 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 18:24:05.405715    3896 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 18:24:05.405715    3896 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 18:24:05.406378    3896 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 18:24:05.406378    3896 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 18:24:05.406378    3896 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 18:24:05.406378    3896 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 18:24:05.406378    3896 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 18:24:05.407060    3896 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 18:24:05.407163    3896 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 18:24:05.407163    3896 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 18:24:05.407163    3896 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 18:24:05.407877    3896 out.go:204]   - Booting up control plane ...
	I0229 18:24:05.407877    3896 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 18:24:05.407877    3896 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 18:24:05.407877    3896 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 18:24:05.408687    3896 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 18:24:05.408780    3896 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 18:24:05.408780    3896 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 18:24:05.408780    3896 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:24:05.409338    3896 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:24:05.409338    3896 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:24:05.410038    3896 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:24:05.410228    3896 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:24:05.410493    3896 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:24:05.410595    3896 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:24:05.410921    3896 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:24:05.410987    3896 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 18:24:05.411526    3896 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 18:24:05.411589    3896 kubeadm.go:322] 
	I0229 18:24:05.411715    3896 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0229 18:24:05.411930    3896 kubeadm.go:322] 		timed out waiting for the condition
	I0229 18:24:05.411930    3896 kubeadm.go:322] 
	I0229 18:24:05.411930    3896 kubeadm.go:322] 	This error is likely caused by:
	I0229 18:24:05.411930    3896 kubeadm.go:322] 		- The kubelet is not running
	I0229 18:24:05.411930    3896 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 18:24:05.411930    3896 kubeadm.go:322] 
	I0229 18:24:05.412497    3896 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 18:24:05.412497    3896 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0229 18:24:05.412497    3896 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0229 18:24:05.412497    3896 kubeadm.go:322] 
	I0229 18:24:05.412497    3896 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 18:24:05.413020    3896 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0229 18:24:05.413020    3896 kubeadm.go:322] 
	I0229 18:24:05.413113    3896 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0229 18:24:05.413421    3896 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0229 18:24:05.413668    3896 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0229 18:24:05.413761    3896 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0229 18:24:05.413822    3896 kubeadm.go:322] 
	I0229 18:24:05.413913    3896 kubeadm.go:406] StartCluster complete in 3m56.9098482s
	I0229 18:24:05.420596    3896 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 18:24:05.445542    3896 logs.go:276] 0 containers: []
	W0229 18:24:05.445542    3896 logs.go:278] No container was found matching "kube-apiserver"
	I0229 18:24:05.454986    3896 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 18:24:05.507108    3896 logs.go:276] 0 containers: []
	W0229 18:24:05.509113    3896 logs.go:278] No container was found matching "etcd"
	I0229 18:24:05.518106    3896 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 18:24:05.553121    3896 logs.go:276] 0 containers: []
	W0229 18:24:05.553163    3896 logs.go:278] No container was found matching "coredns"
	I0229 18:24:05.560654    3896 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 18:24:05.617203    3896 logs.go:276] 0 containers: []
	W0229 18:24:05.617203    3896 logs.go:278] No container was found matching "kube-scheduler"
	I0229 18:24:05.625694    3896 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 18:24:05.666419    3896 logs.go:276] 0 containers: []
	W0229 18:24:05.666453    3896 logs.go:278] No container was found matching "kube-proxy"
	I0229 18:24:05.674192    3896 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 18:24:05.698379    3896 logs.go:276] 0 containers: []
	W0229 18:24:05.698457    3896 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 18:24:05.705210    3896 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 18:24:05.732550    3896 logs.go:276] 0 containers: []
	W0229 18:24:05.732591    3896 logs.go:278] No container was found matching "kindnet"
	I0229 18:24:05.732667    3896 logs.go:123] Gathering logs for kubelet ...
	I0229 18:24:05.732705    3896 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0229 18:24:05.774049    3896 logs.go:138] Found kubelet problem: Feb 29 18:23:58 ingress-addon-legacy-056900 kubelet[36725]: F0229 18:23:58.305689   36725 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	W0229 18:24:05.782348    3896 logs.go:138] Found kubelet problem: Feb 29 18:23:59 ingress-addon-legacy-056900 kubelet[36916]: F0229 18:23:59.500842   36916 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	W0229 18:24:05.787782    3896 logs.go:138] Found kubelet problem: Feb 29 18:24:00 ingress-addon-legacy-056900 kubelet[37109]: F0229 18:24:00.788452   37109 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	W0229 18:24:05.790574    3896 logs.go:138] Found kubelet problem: Feb 29 18:24:01 ingress-addon-legacy-056900 kubelet[37299]: F0229 18:24:01.985905   37299 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	W0229 18:24:05.798801    3896 logs.go:138] Found kubelet problem: Feb 29 18:24:03 ingress-addon-legacy-056900 kubelet[37492]: F0229 18:24:03.266277   37492 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	W0229 18:24:05.808895    3896 logs.go:138] Found kubelet problem: Feb 29 18:24:04 ingress-addon-legacy-056900 kubelet[37679]: F0229 18:24:04.537202   37679 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	W0229 18:24:05.819141    3896 logs.go:138] Found kubelet problem: Feb 29 18:24:05 ingress-addon-legacy-056900 kubelet[37872]: F0229 18:24:05.855673   37872 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	I0229 18:24:05.819141    3896 logs.go:123] Gathering logs for dmesg ...
	I0229 18:24:05.819141    3896 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 18:24:05.846012    3896 logs.go:123] Gathering logs for describe nodes ...
	I0229 18:24:05.846012    3896 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 18:24:05.942399    3896 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 18:24:05.942490    3896 logs.go:123] Gathering logs for Docker ...
	I0229 18:24:05.942523    3896 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 18:24:06.001890    3896 logs.go:123] Gathering logs for container status ...
	I0229 18:24:06.001890    3896 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0229 18:24:06.099289    3896 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0229 18:22:09.040929   19884 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 18:22:10.548671   19884 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 18:22:10.552404   19884 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 18:24:06.099373    3896 out.go:239] * 
	* 
	W0229 18:24:06.099563    3896 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0229 18:22:09.040929   19884 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 18:22:10.548671   19884 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 18:22:10.552404   19884 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0229 18:22:09.040929   19884 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 18:22:10.548671   19884 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 18:22:10.552404   19884 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 18:24:06.099744    3896 out.go:239] * 
	W0229 18:24:06.101422    3896 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 18:24:06.102139    3896 out.go:177] X Problems detected in kubelet:
	I0229 18:24:06.102972    3896 out.go:177]   Feb 29 18:23:58 ingress-addon-legacy-056900 kubelet[36725]: F0229 18:23:58.305689   36725 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	I0229 18:24:06.103578    3896 out.go:177]   Feb 29 18:23:59 ingress-addon-legacy-056900 kubelet[36916]: F0229 18:23:59.500842   36916 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	I0229 18:24:06.104282    3896 out.go:177]   Feb 29 18:24:00 ingress-addon-legacy-056900 kubelet[37109]: F0229 18:24:00.788452   37109 kubelet.go:1399] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
	I0229 18:24:06.108079    3896 out.go:177] 
	W0229 18:24:06.108960    3896 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0229 18:22:09.040929   19884 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0229 18:22:10.548671   19884 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0229 18:22:10.552404   19884 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 18:24:06.109085    3896 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 18:24:06.109153    3896 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 18:24:06.109642    3896 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p ingress-addon-legacy-056900 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperv" : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (395.42s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (110.11s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-056900 addons enable ingress --alsologtostderr -v=5
E0229 18:25:23.086526    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
E0229 18:25:31.772785    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ingress-addon-legacy-056900 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m39.0236345s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 18:24:06.660280    5244 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0229 18:24:06.734049    5244 out.go:291] Setting OutFile to fd 1420 ...
	I0229 18:24:06.749625    5244 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:24:06.749688    5244 out.go:304] Setting ErrFile to fd 1324...
	I0229 18:24:06.749728    5244 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:24:06.764300    5244 mustload.go:65] Loading cluster: ingress-addon-legacy-056900
	I0229 18:24:06.765226    5244 config.go:182] Loaded profile config "ingress-addon-legacy-056900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0229 18:24:06.765226    5244 addons.go:597] checking whether the cluster is paused
	I0229 18:24:06.766020    5244 config.go:182] Loaded profile config "ingress-addon-legacy-056900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0229 18:24:06.766020    5244 host.go:66] Checking if "ingress-addon-legacy-056900" exists ...
	I0229 18:24:06.766298    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-056900 ).state
	I0229 18:24:08.795507    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:24:08.795507    5244 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:24:08.819325    5244 ssh_runner.go:195] Run: systemctl --version
	I0229 18:24:08.819325    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-056900 ).state
	I0229 18:24:10.752865    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:24:10.759815    5244 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:24:10.759865    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-056900 ).networkadapters[0]).ipaddresses[0]
	I0229 18:24:13.083891    5244 main.go:141] libmachine: [stdout =====>] : 172.26.57.135
	
	I0229 18:24:13.083983    5244 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:24:13.084278    5244 sshutil.go:53] new ssh client: &{IP:172.26.57.135 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ingress-addon-legacy-056900\id_rsa Username:docker}
	I0229 18:24:13.177687    5244 ssh_runner.go:235] Completed: systemctl --version: (4.3581206s)
	I0229 18:24:13.185385    5244 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 18:24:13.212866    5244 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0229 18:24:13.214154    5244 config.go:182] Loaded profile config "ingress-addon-legacy-056900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0229 18:24:13.214154    5244 addons.go:69] Setting ingress=true in profile "ingress-addon-legacy-056900"
	I0229 18:24:13.214154    5244 addons.go:234] Setting addon ingress=true in "ingress-addon-legacy-056900"
	I0229 18:24:13.214154    5244 host.go:66] Checking if "ingress-addon-legacy-056900" exists ...
	I0229 18:24:13.215988    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-056900 ).state
	I0229 18:24:15.132622    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:24:15.132622    5244 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:24:15.143708    5244 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	I0229 18:24:15.144457    5244 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0229 18:24:15.144995    5244 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0229 18:24:15.146044    5244 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0229 18:24:15.146044    5244 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15618 bytes)
	I0229 18:24:15.146155    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-056900 ).state
	I0229 18:24:17.123864    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:24:17.123864    5244 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:24:17.133771    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-056900 ).networkadapters[0]).ipaddresses[0]
	I0229 18:24:19.485392    5244 main.go:141] libmachine: [stdout =====>] : 172.26.57.135
	
	I0229 18:24:19.485392    5244 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:24:19.485392    5244 sshutil.go:53] new ssh client: &{IP:172.26.57.135 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ingress-addon-legacy-056900\id_rsa Username:docker}
	I0229 18:24:19.619421    5244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 18:24:19.716113    5244 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:24:19.716220    5244 retry.go:31] will retry after 240.803736ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:24:19.981125    5244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 18:24:20.065443    5244 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:24:20.065573    5244 retry.go:31] will retry after 372.490119ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:24:20.455851    5244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 18:24:20.561808    5244 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:24:20.561853    5244 retry.go:31] will retry after 387.452818ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:24:20.982957    5244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 18:24:21.099585    5244 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:24:21.099585    5244 retry.go:31] will retry after 1.143177422s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:24:22.260329    5244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 18:24:22.385238    5244 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:24:22.385238    5244 retry.go:31] will retry after 1.738657685s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:24:24.146395    5244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 18:24:24.239498    5244 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:24:24.239715    5244 retry.go:31] will retry after 2.055477226s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:24:26.311707    5244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 18:24:26.402244    5244 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:24:26.402244    5244 retry.go:31] will retry after 4.11992618s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:24:30.550115    5244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 18:24:30.660135    5244 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:24:30.660225    5244 retry.go:31] will retry after 3.525092716s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:24:34.196128    5244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 18:24:34.286800    5244 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:24:34.286800    5244 retry.go:31] will retry after 7.052611833s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:24:41.360398    5244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 18:24:41.450504    5244 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:24:41.450643    5244 retry.go:31] will retry after 8.394471023s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:24:49.854785    5244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 18:24:49.940267    5244 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:24:49.940361    5244 retry.go:31] will retry after 14.84843642s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:25:04.812691    5244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 18:25:04.935372    5244 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:25:04.935463    5244 retry.go:31] will retry after 14.190122288s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:25:19.143256    5244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 18:25:19.231872    5244 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:25:19.231970    5244 retry.go:31] will retry after 26.173731804s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:25:45.422735    5244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0229 18:25:45.511629    5244 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:25:45.511629    5244 addons.go:470] Verifying addon ingress=true in "ingress-addon-legacy-056900"
	I0229 18:25:45.514084    5244 out.go:177] * Verifying ingress addon...
	I0229 18:25:45.516293    5244 out.go:177] 
	W0229 18:25:45.516936    5244 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-056900" does not exist: client config: context "ingress-addon-legacy-056900" does not exist]
	W0229 18:25:45.517008    5244 out.go:239] * 
	W0229 18:25:45.523127    5244 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube_addons_2eb5e4e15e556888b35a5aefe6dc4c93587c1b36_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 18:25:45.523681    5244 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ingress-addon-legacy-056900 -n ingress-addon-legacy-056900
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p ingress-addon-legacy-056900 -n ingress-addon-legacy-056900: exit status 6 (11.0767381s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 18:25:45.673077    3512 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0229 18:25:56.576627    3512 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-056900" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-056900" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (110.11s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (105.42s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-056900 addons enable ingress-dns --alsologtostderr -v=5
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ingress-addon-legacy-056900 addons enable ingress-dns --alsologtostderr -v=5: exit status 1 (1m34.4808708s)

                                                
                                                
-- stdout --
	* ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 18:25:56.723829    9208 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0229 18:25:56.789000    9208 out.go:291] Setting OutFile to fd 1040 ...
	I0229 18:25:56.805346    9208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:25:56.805346    9208 out.go:304] Setting ErrFile to fd 712...
	I0229 18:25:56.805346    9208 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:25:56.816290    9208 mustload.go:65] Loading cluster: ingress-addon-legacy-056900
	I0229 18:25:56.820145    9208 config.go:182] Loaded profile config "ingress-addon-legacy-056900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0229 18:25:56.820145    9208 addons.go:597] checking whether the cluster is paused
	I0229 18:25:56.820658    9208 config.go:182] Loaded profile config "ingress-addon-legacy-056900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0229 18:25:56.820658    9208 host.go:66] Checking if "ingress-addon-legacy-056900" exists ...
	I0229 18:25:56.821787    9208 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-056900 ).state
	I0229 18:25:58.758305    9208 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:25:58.758305    9208 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:25:58.767099    9208 ssh_runner.go:195] Run: systemctl --version
	I0229 18:25:58.767099    9208 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-056900 ).state
	I0229 18:26:00.711073    9208 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:26:00.711155    9208 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:26:00.711233    9208 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-056900 ).networkadapters[0]).ipaddresses[0]
	I0229 18:26:03.049532    9208 main.go:141] libmachine: [stdout =====>] : 172.26.57.135
	
	I0229 18:26:03.051074    9208 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:26:03.051586    9208 sshutil.go:53] new ssh client: &{IP:172.26.57.135 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ingress-addon-legacy-056900\id_rsa Username:docker}
	I0229 18:26:03.153598    9208 ssh_runner.go:235] Completed: systemctl --version: (4.3861583s)
	I0229 18:26:03.161515    9208 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 18:26:03.197387    9208 out.go:177] * ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0229 18:26:03.199103    9208 config.go:182] Loaded profile config "ingress-addon-legacy-056900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0229 18:26:03.199103    9208 addons.go:69] Setting ingress-dns=true in profile "ingress-addon-legacy-056900"
	I0229 18:26:03.199103    9208 addons.go:234] Setting addon ingress-dns=true in "ingress-addon-legacy-056900"
	I0229 18:26:03.199789    9208 host.go:66] Checking if "ingress-addon-legacy-056900" exists ...
	I0229 18:26:03.200597    9208 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-056900 ).state
	I0229 18:26:05.137364    9208 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:26:05.137364    9208 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:26:05.148171    9208 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0229 18:26:05.149980    9208 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0229 18:26:05.150068    9208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0229 18:26:05.150227    9208 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-056900 ).state
	I0229 18:26:07.098621    9208 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:26:07.098621    9208 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:26:07.098621    9208 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-056900 ).networkadapters[0]).ipaddresses[0]
	I0229 18:26:09.427044    9208 main.go:141] libmachine: [stdout =====>] : 172.26.57.135
	
	I0229 18:26:09.427044    9208 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:26:09.438052    9208 sshutil.go:53] new ssh client: &{IP:172.26.57.135 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ingress-addon-legacy-056900\id_rsa Username:docker}
	I0229 18:26:09.590855    9208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 18:26:09.723216    9208 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:26:09.723302    9208 retry.go:31] will retry after 272.518284ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:26:10.014630    9208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 18:26:10.099080    9208 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:26:10.099114    9208 retry.go:31] will retry after 237.840467ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:26:10.347359    9208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 18:26:10.437951    9208 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:26:10.438061    9208 retry.go:31] will retry after 738.339941ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:26:11.191526    9208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 18:26:11.314900    9208 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:26:11.314900    9208 retry.go:31] will retry after 987.126491ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:26:12.331114    9208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 18:26:12.429468    9208 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:26:12.429468    9208 retry.go:31] will retry after 1.31303771s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:26:13.769455    9208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 18:26:13.907967    9208 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:26:13.907967    9208 retry.go:31] will retry after 956.124948ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:26:14.884929    9208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 18:26:14.972359    9208 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:26:14.972359    9208 retry.go:31] will retry after 3.918718832s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:26:18.900380    9208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 18:26:18.991877    9208 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:26:18.992036    9208 retry.go:31] will retry after 3.724731562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:26:22.736555    9208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 18:26:22.826160    9208 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:26:22.826229    9208 retry.go:31] will retry after 7.248415103s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:26:30.099140    9208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 18:26:30.180115    9208 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:26:30.180115    9208 retry.go:31] will retry after 9.299208019s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:26:39.500953    9208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 18:26:39.609949    9208 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:26:39.609949    9208 retry.go:31] will retry after 11.777738169s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:26:51.405152    9208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 18:26:51.492034    9208 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:26:51.492250    9208 retry.go:31] will retry after 18.379555274s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:27:09.892611    9208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0229 18:27:09.983057    9208 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0229 18:27:09.983057    9208 retry.go:31] will retry after 33.292524427s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ingress-addon-legacy-056900 -n ingress-addon-legacy-056900
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p ingress-addon-legacy-056900 -n ingress-addon-legacy-056900: exit status 6 (10.9189723s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 18:27:31.231400    9872 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0229 18:27:42.000304    9872 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-056900" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-056900" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (105.42s)

TestMountStart/serial/StartWithMountSecond (210.14s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-680500 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
E0229 18:45:23.152746    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
E0229 18:45:31.834802    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p mount-start-2-680500 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: exit status 90 (3m18.8272394s)

-- stdout --
	* [mount-start-2-680500] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting minikube without Kubernetes in cluster mount-start-2-680500
	* Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	W0229 18:42:55.097360    9412 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Feb 29 18:44:43 mount-start-2-680500 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 18:44:43 mount-start-2-680500 dockerd[636]: time="2024-02-29T18:44:43.483789609Z" level=info msg="Starting up"
	Feb 29 18:44:43 mount-start-2-680500 dockerd[636]: time="2024-02-29T18:44:43.484884746Z" level=info msg="containerd not running, starting managed containerd"
	Feb 29 18:44:43 mount-start-2-680500 dockerd[636]: time="2024-02-29T18:44:43.486240840Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=643
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.524059629Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.552008282Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.552117206Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.552189821Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.552206925Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.552299945Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.552402367Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.552664724Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.552781750Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.552802654Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.552814157Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.552911578Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.553289559Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.556556367Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.556678493Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.556834127Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.556958354Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.557156197Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.557218910Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.557232113Z" level=info msg="metadata content store policy set" policy=shared
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.566959420Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.567129357Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.567166865Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.567184769Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.567200172Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.567359306Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.568976057Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.569276722Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.569347737Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.569632799Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.569757526Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.569819939Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.569868550Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.569925162Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.570067993Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.570142409Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.570190219Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.570234329Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.570293542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.570527793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.570722135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.571163030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.571205539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.571223243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.571237446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.571252249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.571268453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.571287157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.571300360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.571314363Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.571328366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.571347370Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.571372075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.571393580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.571438190Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.571505204Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.571522608Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.571535611Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.571547613Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.571648635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.571681742Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.571694745Z" level=info msg="NRI interface is disabled by configuration."
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.571933397Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.572227361Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.572292275Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Feb 29 18:44:43 mount-start-2-680500 dockerd[643]: time="2024-02-29T18:44:43.572324882Z" level=info msg="containerd successfully booted in 0.049775s"
	Feb 29 18:44:43 mount-start-2-680500 dockerd[636]: time="2024-02-29T18:44:43.602893702Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 29 18:44:43 mount-start-2-680500 dockerd[636]: time="2024-02-29T18:44:43.619192231Z" level=info msg="Loading containers: start."
	Feb 29 18:44:43 mount-start-2-680500 dockerd[636]: time="2024-02-29T18:44:43.875160442Z" level=info msg="Loading containers: done."
	Feb 29 18:44:43 mount-start-2-680500 dockerd[636]: time="2024-02-29T18:44:43.896463488Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Feb 29 18:44:43 mount-start-2-680500 dockerd[636]: time="2024-02-29T18:44:43.896595616Z" level=info msg="Daemon has completed initialization"
	Feb 29 18:44:43 mount-start-2-680500 dockerd[636]: time="2024-02-29T18:44:43.947003208Z" level=info msg="API listen on [::]:2376"
	Feb 29 18:44:43 mount-start-2-680500 dockerd[636]: time="2024-02-29T18:44:43.947170445Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 29 18:44:43 mount-start-2-680500 systemd[1]: Started Docker Application Container Engine.
	Feb 29 18:45:12 mount-start-2-680500 dockerd[636]: time="2024-02-29T18:45:12.841528693Z" level=info msg="Processing signal 'terminated'"
	Feb 29 18:45:12 mount-start-2-680500 systemd[1]: Stopping Docker Application Container Engine...
	Feb 29 18:45:12 mount-start-2-680500 dockerd[636]: time="2024-02-29T18:45:12.842667038Z" level=info msg="Daemon shutdown complete"
	Feb 29 18:45:12 mount-start-2-680500 dockerd[636]: time="2024-02-29T18:45:12.842803843Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Feb 29 18:45:12 mount-start-2-680500 dockerd[636]: time="2024-02-29T18:45:12.842952749Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Feb 29 18:45:12 mount-start-2-680500 dockerd[636]: time="2024-02-29T18:45:12.842995050Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Feb 29 18:45:13 mount-start-2-680500 systemd[1]: docker.service: Deactivated successfully.
	Feb 29 18:45:13 mount-start-2-680500 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 18:45:13 mount-start-2-680500 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 18:45:13 mount-start-2-680500 dockerd[975]: time="2024-02-29T18:45:13.913148099Z" level=info msg="Starting up"
	Feb 29 18:46:13 mount-start-2-680500 dockerd[975]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 29 18:46:13 mount-start-2-680500 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 29 18:46:13 mount-start-2-680500 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 29 18:46:13 mount-start-2-680500 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p mount-start-2-680500 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv" : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-2-680500 -n mount-start-2-680500
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p mount-start-2-680500 -n mount-start-2-680500: exit status 6 (11.3085127s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0229 18:46:13.949565   13148 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0229 18:46:25.090129   13148 status.go:415] kubeconfig endpoint: extract IP: "mount-start-2-680500" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "mount-start-2-680500" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMountStart/serial/StartWithMountSecond (210.14s)

TestMultiNode/serial/PingHostFrom2Pods (53.02s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-421600 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-421600 -- exec busybox-5b5d89c9d6-4lvtb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-421600 -- exec busybox-5b5d89c9d6-4lvtb -- sh -c "ping -c 1 172.26.48.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-421600 -- exec busybox-5b5d89c9d6-4lvtb -- sh -c "ping -c 1 172.26.48.1": exit status 1 (10.4639308s)

-- stdout --
	PING 172.26.48.1 (172.26.48.1): 56 data bytes
	
	--- 172.26.48.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0229 18:54:27.407404   11432 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
multinode_test.go:600: Failed to ping host (172.26.48.1) from pod (busybox-5b5d89c9d6-4lvtb): exit status 1
multinode_test.go:588: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-421600 -- exec busybox-5b5d89c9d6-dk9k8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-421600 -- exec busybox-5b5d89c9d6-dk9k8 -- sh -c "ping -c 1 172.26.48.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-421600 -- exec busybox-5b5d89c9d6-dk9k8 -- sh -c "ping -c 1 172.26.48.1": exit status 1 (10.4245363s)

-- stdout --
	PING 172.26.48.1 (172.26.48.1): 56 data bytes
	
	--- 172.26.48.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0229 18:54:38.281671    3968 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
multinode_test.go:600: Failed to ping host (172.26.48.1) from pod (busybox-5b5d89c9d6-dk9k8): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-421600 -n multinode-421600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-421600 -n multinode-421600: (10.9012043s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-421600 logs -n 25: (7.5742813s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -p second-871300                                  | second-871300        | minikube5\jenkins | v1.32.0 | 29 Feb 24 18:35 UTC | 29 Feb 24 18:38 UTC |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| delete  | -p second-871300                                  | second-871300        | minikube5\jenkins | v1.32.0 | 29 Feb 24 18:39 UTC | 29 Feb 24 18:39 UTC |
	| delete  | -p first-868100                                   | first-868100         | minikube5\jenkins | v1.32.0 | 29 Feb 24 18:39 UTC | 29 Feb 24 18:40 UTC |
	| start   | -p mount-start-1-680500                           | mount-start-1-680500 | minikube5\jenkins | v1.32.0 | 29 Feb 24 18:40 UTC | 29 Feb 24 18:42 UTC |
	|         | --memory=2048 --mount                             |                      |                   |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |                   |         |                     |                     |
	|         | 6543 --mount-port 46464                           |                      |                   |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| mount   | C:\Users\jenkins.minikube5:/minikube-host         | mount-start-1-680500 | minikube5\jenkins | v1.32.0 | 29 Feb 24 18:42 UTC |                     |
	|         | --profile mount-start-1-680500 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46464 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-1-680500 ssh -- ls                    | mount-start-1-680500 | minikube5\jenkins | v1.32.0 | 29 Feb 24 18:42 UTC | 29 Feb 24 18:42 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| start   | -p mount-start-2-680500                           | mount-start-2-680500 | minikube5\jenkins | v1.32.0 | 29 Feb 24 18:42 UTC |                     |
	|         | --memory=2048 --mount                             |                      |                   |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |                   |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |                   |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-680500                           | mount-start-2-680500 | minikube5\jenkins | v1.32.0 | 29 Feb 24 18:46 UTC | 29 Feb 24 18:47 UTC |
	| delete  | -p mount-start-1-680500                           | mount-start-1-680500 | minikube5\jenkins | v1.32.0 | 29 Feb 24 18:47 UTC | 29 Feb 24 18:47 UTC |
	| start   | -p multinode-421600                               | multinode-421600     | minikube5\jenkins | v1.32.0 | 29 Feb 24 18:47 UTC | 29 Feb 24 18:53 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-421600 -- apply -f                   | multinode-421600     | minikube5\jenkins | v1.32.0 | 29 Feb 24 18:54 UTC | 29 Feb 24 18:54 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-421600 -- rollout                    | multinode-421600     | minikube5\jenkins | v1.32.0 | 29 Feb 24 18:54 UTC | 29 Feb 24 18:54 UTC |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-421600 -- get pods -o                | multinode-421600     | minikube5\jenkins | v1.32.0 | 29 Feb 24 18:54 UTC | 29 Feb 24 18:54 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-421600 -- get pods -o                | multinode-421600     | minikube5\jenkins | v1.32.0 | 29 Feb 24 18:54 UTC | 29 Feb 24 18:54 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-421600 -- exec                       | multinode-421600     | minikube5\jenkins | v1.32.0 | 29 Feb 24 18:54 UTC | 29 Feb 24 18:54 UTC |
	|         | busybox-5b5d89c9d6-4lvtb --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-421600 -- exec                       | multinode-421600     | minikube5\jenkins | v1.32.0 | 29 Feb 24 18:54 UTC | 29 Feb 24 18:54 UTC |
	|         | busybox-5b5d89c9d6-dk9k8 --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-421600 -- exec                       | multinode-421600     | minikube5\jenkins | v1.32.0 | 29 Feb 24 18:54 UTC | 29 Feb 24 18:54 UTC |
	|         | busybox-5b5d89c9d6-4lvtb --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-421600 -- exec                       | multinode-421600     | minikube5\jenkins | v1.32.0 | 29 Feb 24 18:54 UTC | 29 Feb 24 18:54 UTC |
	|         | busybox-5b5d89c9d6-dk9k8 --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-421600 -- exec                       | multinode-421600     | minikube5\jenkins | v1.32.0 | 29 Feb 24 18:54 UTC | 29 Feb 24 18:54 UTC |
	|         | busybox-5b5d89c9d6-4lvtb -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-421600 -- exec                       | multinode-421600     | minikube5\jenkins | v1.32.0 | 29 Feb 24 18:54 UTC | 29 Feb 24 18:54 UTC |
	|         | busybox-5b5d89c9d6-dk9k8 -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-421600 -- get pods -o                | multinode-421600     | minikube5\jenkins | v1.32.0 | 29 Feb 24 18:54 UTC | 29 Feb 24 18:54 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-421600 -- exec                       | multinode-421600     | minikube5\jenkins | v1.32.0 | 29 Feb 24 18:54 UTC | 29 Feb 24 18:54 UTC |
	|         | busybox-5b5d89c9d6-4lvtb                          |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-421600 -- exec                       | multinode-421600     | minikube5\jenkins | v1.32.0 | 29 Feb 24 18:54 UTC |                     |
	|         | busybox-5b5d89c9d6-4lvtb -- sh                    |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.26.48.1                          |                      |                   |         |                     |                     |
	| kubectl | -p multinode-421600 -- exec                       | multinode-421600     | minikube5\jenkins | v1.32.0 | 29 Feb 24 18:54 UTC | 29 Feb 24 18:54 UTC |
	|         | busybox-5b5d89c9d6-dk9k8                          |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-421600 -- exec                       | multinode-421600     | minikube5\jenkins | v1.32.0 | 29 Feb 24 18:54 UTC |                     |
	|         | busybox-5b5d89c9d6-dk9k8 -- sh                    |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.26.48.1                          |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 18:47:52
	Running on machine: minikube5
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 18:47:52.267870    7340 out.go:291] Setting OutFile to fd 1136 ...
	I0229 18:47:52.268297    7340 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:47:52.268297    7340 out.go:304] Setting ErrFile to fd 1496...
	I0229 18:47:52.268297    7340 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:47:52.286755    7340 out.go:298] Setting JSON to false
	I0229 18:47:52.289668    7340 start.go:129] hostinfo: {"hostname":"minikube5","uptime":54209,"bootTime":1709178262,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0229 18:47:52.289821    7340 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 18:47:52.290547    7340 out.go:177] * [multinode-421600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 18:47:52.291168    7340 notify.go:220] Checking for updates...
	I0229 18:47:52.291837    7340 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 18:47:52.292433    7340 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 18:47:52.293010    7340 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0229 18:47:52.293604    7340 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 18:47:52.294211    7340 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 18:47:52.295404    7340 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 18:47:57.216484    7340 out.go:177] * Using the hyperv driver based on user configuration
	I0229 18:47:57.216544    7340 start.go:299] selected driver: hyperv
	I0229 18:47:57.216544    7340 start.go:903] validating driver "hyperv" against <nil>
	I0229 18:47:57.217082    7340 start.go:914] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 18:47:57.262819    7340 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 18:47:57.263997    7340 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 18:47:57.264450    7340 cni.go:84] Creating CNI manager for ""
	I0229 18:47:57.264450    7340 cni.go:136] 0 nodes found, recommending kindnet
	I0229 18:47:57.264450    7340 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0229 18:47:57.264450    7340 start_flags.go:323] config:
	{Name:multinode-421600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-421600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:47:57.264450    7340 iso.go:125] acquiring lock: {Name:mk91f2ee29fbed5605669750e8cfa308a1229357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 18:47:57.265641    7340 out.go:177] * Starting control plane node multinode-421600 in cluster multinode-421600
	I0229 18:47:57.266453    7340 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 18:47:57.266780    7340 preload.go:148] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0229 18:47:57.266780    7340 cache.go:56] Caching tarball of preloaded images
	I0229 18:47:57.266923    7340 preload.go:174] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 18:47:57.267217    7340 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0229 18:47:57.267356    7340 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\config.json ...
	I0229 18:47:57.267786    7340 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\config.json: {Name:mk4b807130db92338bfeed8c8219cadcd2f31d67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:47:57.268070    7340 start.go:365] acquiring machines lock for multinode-421600: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 18:47:57.268951    7340 start.go:369] acquired machines lock for "multinode-421600" in 0s
	I0229 18:47:57.269100    7340 start.go:93] Provisioning new machine with config: &{Name:multinode-421600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-421600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 18:47:57.269100    7340 start.go:125] createHost starting for "" (driver="hyperv")
	I0229 18:47:57.270162    7340 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0229 18:47:57.270400    7340 start.go:159] libmachine.API.Create for "multinode-421600" (driver="hyperv")
	I0229 18:47:57.270448    7340 client.go:168] LocalClient.Create starting
	I0229 18:47:57.270448    7340 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0229 18:47:57.270448    7340 main.go:141] libmachine: Decoding PEM data...
	I0229 18:47:57.270991    7340 main.go:141] libmachine: Parsing certificate...
	I0229 18:47:57.271149    7340 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0229 18:47:57.271307    7340 main.go:141] libmachine: Decoding PEM data...
	I0229 18:47:57.271352    7340 main.go:141] libmachine: Parsing certificate...
	I0229 18:47:57.271442    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0229 18:47:59.215714    7340 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0229 18:47:59.215777    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:47:59.215777    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0229 18:48:00.831038    7340 main.go:141] libmachine: [stdout =====>] : False
	
	I0229 18:48:00.831038    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:48:00.839077    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0229 18:48:02.225151    7340 main.go:141] libmachine: [stdout =====>] : True
	
	I0229 18:48:02.225151    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:48:02.232866    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0229 18:48:05.480567    7340 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0229 18:48:05.481509    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:48:05.484202    7340 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 18:48:05.835428    7340 main.go:141] libmachine: Creating SSH key...
	I0229 18:48:06.178046    7340 main.go:141] libmachine: Creating VM...
	I0229 18:48:06.178046    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0229 18:48:08.726009    7340 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0229 18:48:08.736747    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:48:08.736897    7340 main.go:141] libmachine: Using switch "Default Switch"
	I0229 18:48:08.736897    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0229 18:48:10.330009    7340 main.go:141] libmachine: [stdout =====>] : True
	
	I0229 18:48:10.330009    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:48:10.330009    7340 main.go:141] libmachine: Creating VHD
	I0229 18:48:10.337255    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600\fixed.vhd' -SizeBytes 10MB -Fixed
	I0229 18:48:13.917944    7340 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 8E0FBC6B-8E56-47FE-858C-2639D9FEF130
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0229 18:48:13.917944    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:48:13.917944    7340 main.go:141] libmachine: Writing magic tar header
	I0229 18:48:13.917944    7340 main.go:141] libmachine: Writing SSH key tar header
	I0229 18:48:13.926813    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600\disk.vhd' -VHDType Dynamic -DeleteSource
	I0229 18:48:16.892170    7340 main.go:141] libmachine: [stdout =====>] : 
	I0229 18:48:16.892170    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:48:16.899177    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600\disk.vhd' -SizeBytes 20000MB
	I0229 18:48:19.241854    7340 main.go:141] libmachine: [stdout =====>] : 
	I0229 18:48:19.241854    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:48:19.241854    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-421600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0229 18:48:22.522752    7340 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-421600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0229 18:48:22.522752    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:48:22.522752    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-421600 -DynamicMemoryEnabled $false
	I0229 18:48:24.555798    7340 main.go:141] libmachine: [stdout =====>] : 
	I0229 18:48:24.567420    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:48:24.567499    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-421600 -Count 2
	I0229 18:48:26.550124    7340 main.go:141] libmachine: [stdout =====>] : 
	I0229 18:48:26.550124    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:48:26.550124    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-421600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600\boot2docker.iso'
	I0229 18:48:28.872109    7340 main.go:141] libmachine: [stdout =====>] : 
	I0229 18:48:28.872109    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:48:28.872109    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-421600 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600\disk.vhd'
	I0229 18:48:31.227394    7340 main.go:141] libmachine: [stdout =====>] : 
	I0229 18:48:31.227394    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:48:31.227394    7340 main.go:141] libmachine: Starting VM...
	I0229 18:48:31.227394    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-421600
	I0229 18:48:33.853468    7340 main.go:141] libmachine: [stdout =====>] : 
	I0229 18:48:33.853629    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:48:33.853629    7340 main.go:141] libmachine: Waiting for host to start...
	I0229 18:48:33.853629    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 18:48:35.908413    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:48:35.908413    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:48:35.910319    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 18:48:38.214005    7340 main.go:141] libmachine: [stdout =====>] : 
	I0229 18:48:38.214005    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:48:39.216785    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 18:48:41.248795    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:48:41.248795    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:48:41.252101    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 18:48:43.554989    7340 main.go:141] libmachine: [stdout =====>] : 
	I0229 18:48:43.555405    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:48:44.555467    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 18:48:46.513129    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:48:46.521412    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:48:46.521446    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 18:48:48.804004    7340 main.go:141] libmachine: [stdout =====>] : 
	I0229 18:48:48.804004    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:48:49.808784    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 18:48:51.800195    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:48:51.800195    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:48:51.810560    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 18:48:54.090026    7340 main.go:141] libmachine: [stdout =====>] : 
	I0229 18:48:54.099956    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:48:55.104313    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 18:48:57.086368    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:48:57.096695    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:48:57.096695    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 18:48:59.478009    7340 main.go:141] libmachine: [stdout =====>] : 172.26.62.28
	
	I0229 18:48:59.478009    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:48:59.478009    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 18:49:01.439142    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:49:01.439142    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:49:01.439142    7340 machine.go:88] provisioning docker machine ...
	I0229 18:49:01.449452    7340 buildroot.go:166] provisioning hostname "multinode-421600"
	I0229 18:49:01.449615    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 18:49:03.425536    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:49:03.435523    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:49:03.435523    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 18:49:05.743223    7340 main.go:141] libmachine: [stdout =====>] : 172.26.62.28
	
	I0229 18:49:05.743223    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:49:05.759725    7340 main.go:141] libmachine: Using SSH client type: native
	I0229 18:49:05.767562    7340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.62.28 22 <nil> <nil>}
	I0229 18:49:05.767562    7340 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-421600 && echo "multinode-421600" | sudo tee /etc/hostname
	I0229 18:49:05.920640    7340 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-421600
	
	I0229 18:49:05.921032    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 18:49:07.892603    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:49:07.892603    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:49:07.892603    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 18:49:10.208210    7340 main.go:141] libmachine: [stdout =====>] : 172.26.62.28
	
	I0229 18:49:10.208210    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:49:10.222843    7340 main.go:141] libmachine: Using SSH client type: native
	I0229 18:49:10.223373    7340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.62.28 22 <nil> <nil>}
	I0229 18:49:10.223373    7340 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-421600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-421600/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-421600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:49:10.362231    7340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:49:10.362299    7340 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0229 18:49:10.362369    7340 buildroot.go:174] setting up certificates
	I0229 18:49:10.362369    7340 provision.go:83] configureAuth start
	I0229 18:49:10.362452    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 18:49:12.309720    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:49:12.320252    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:49:12.320331    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 18:49:14.658437    7340 main.go:141] libmachine: [stdout =====>] : 172.26.62.28
	
	I0229 18:49:14.658437    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:49:14.658437    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 18:49:16.587880    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:49:16.588032    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:49:16.588115    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 18:49:18.923855    7340 main.go:141] libmachine: [stdout =====>] : 172.26.62.28
	
	I0229 18:49:18.923855    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:49:18.923855    7340 provision.go:138] copyHostCerts
	I0229 18:49:18.923855    7340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0229 18:49:18.923855    7340 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0229 18:49:18.923855    7340 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0229 18:49:18.924446    7340 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0229 18:49:18.925661    7340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0229 18:49:18.925661    7340 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0229 18:49:18.925661    7340 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0229 18:49:18.925661    7340 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0229 18:49:18.926919    7340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0229 18:49:18.926919    7340 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0229 18:49:18.926919    7340 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0229 18:49:18.926919    7340 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0229 18:49:18.928221    7340 provision.go:112] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-421600 san=[172.26.62.28 172.26.62.28 localhost 127.0.0.1 minikube multinode-421600]
	I0229 18:49:19.105673    7340 provision.go:172] copyRemoteCerts
	I0229 18:49:19.116158    7340 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:49:19.116158    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 18:49:21.100532    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:49:21.100532    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:49:21.100625    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 18:49:23.439807    7340 main.go:141] libmachine: [stdout =====>] : 172.26.62.28
	
	I0229 18:49:23.439807    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:49:23.451062    7340 sshutil.go:53] new ssh client: &{IP:172.26.62.28 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600\id_rsa Username:docker}
	I0229 18:49:23.557591    7340 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4411874s)
	I0229 18:49:23.557591    7340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0229 18:49:23.558130    7340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 18:49:23.608049    7340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0229 18:49:23.608567    7340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I0229 18:49:23.659670    7340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0229 18:49:23.659862    7340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 18:49:23.710687    7340 provision.go:86] duration metric: configureAuth took 13.3475794s
	I0229 18:49:23.710687    7340 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:49:23.711309    7340 config.go:182] Loaded profile config "multinode-421600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 18:49:23.711309    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 18:49:25.648945    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:49:25.661831    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:49:25.661831    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 18:49:28.013923    7340 main.go:141] libmachine: [stdout =====>] : 172.26.62.28
	
	I0229 18:49:28.024220    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:49:28.029002    7340 main.go:141] libmachine: Using SSH client type: native
	I0229 18:49:28.029117    7340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.62.28 22 <nil> <nil>}
	I0229 18:49:28.029117    7340 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 18:49:28.159180    7340 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 18:49:28.159180    7340 buildroot.go:70] root file system type: tmpfs
	I0229 18:49:28.159180    7340 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 18:49:28.159180    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 18:49:30.106526    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:49:30.106692    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:49:30.106692    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 18:49:32.430048    7340 main.go:141] libmachine: [stdout =====>] : 172.26.62.28
	
	I0229 18:49:32.441022    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:49:32.445279    7340 main.go:141] libmachine: Using SSH client type: native
	I0229 18:49:32.445970    7340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.62.28 22 <nil> <nil>}
	I0229 18:49:32.445970    7340 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 18:49:32.593784    7340 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 18:49:32.593784    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 18:49:34.515282    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:49:34.525605    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:49:34.525773    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 18:49:36.889525    7340 main.go:141] libmachine: [stdout =====>] : 172.26.62.28
	
	I0229 18:49:36.889525    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:49:36.893907    7340 main.go:141] libmachine: Using SSH client type: native
	I0229 18:49:36.894349    7340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.62.28 22 <nil> <nil>}
	I0229 18:49:36.894427    7340 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 18:49:37.897712    7340 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0229 18:49:37.897712    7340 machine.go:91] provisioned docker machine in 36.4565492s
	I0229 18:49:37.897712    7340 client.go:171] LocalClient.Create took 1m40.6216826s
	I0229 18:49:37.897712    7340 start.go:167] duration metric: libmachine.API.Create for "multinode-421600" took 1m40.6217301s
	I0229 18:49:37.897712    7340 start.go:300] post-start starting for "multinode-421600" (driver="hyperv")
	I0229 18:49:37.898257    7340 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:49:37.907344    7340 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:49:37.907344    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 18:49:39.853781    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:49:39.853781    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:49:39.863971    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 18:49:42.177098    7340 main.go:141] libmachine: [stdout =====>] : 172.26.62.28
	
	I0229 18:49:42.177098    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:49:42.186371    7340 sshutil.go:53] new ssh client: &{IP:172.26.62.28 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600\id_rsa Username:docker}
	I0229 18:49:42.297528    7340 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3898367s)
	I0229 18:49:42.305293    7340 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:49:42.315000    7340 command_runner.go:130] > NAME=Buildroot
	I0229 18:49:42.315000    7340 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0229 18:49:42.315000    7340 command_runner.go:130] > ID=buildroot
	I0229 18:49:42.315000    7340 command_runner.go:130] > VERSION_ID=2023.02.9
	I0229 18:49:42.315000    7340 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0229 18:49:42.315000    7340 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:49:42.315000    7340 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0229 18:49:42.316304    7340 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0229 18:49:42.316859    7340 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem -> 43562.pem in /etc/ssl/certs
	I0229 18:49:42.316859    7340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem -> /etc/ssl/certs/43562.pem
	I0229 18:49:42.326549    7340 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:49:42.342987    7340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem --> /etc/ssl/certs/43562.pem (1708 bytes)
	I0229 18:49:42.387825    7340 start.go:303] post-start completed in 4.4893195s
	I0229 18:49:42.390409    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 18:49:44.304032    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:49:44.304032    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:49:44.314367    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 18:49:46.594859    7340 main.go:141] libmachine: [stdout =====>] : 172.26.62.28
	
	I0229 18:49:46.605025    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:49:46.605025    7340 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\config.json ...
	I0229 18:49:46.607572    7340 start.go:128] duration metric: createHost completed in 1m49.3324063s
	I0229 18:49:46.607765    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 18:49:48.551628    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:49:48.551793    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:49:48.551878    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 18:49:50.886096    7340 main.go:141] libmachine: [stdout =====>] : 172.26.62.28
	
	I0229 18:49:50.886096    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:49:50.900943    7340 main.go:141] libmachine: Using SSH client type: native
	I0229 18:49:50.901434    7340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.62.28 22 <nil> <nil>}
	I0229 18:49:50.901434    7340 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 18:49:51.021758    7340 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709232591.181882394
	
	I0229 18:49:51.021826    7340 fix.go:206] guest clock: 1709232591.181882394
	I0229 18:49:51.021826    7340 fix.go:219] Guest: 2024-02-29 18:49:51.181882394 +0000 UTC Remote: 2024-02-29 18:49:46.6076656 +0000 UTC m=+114.472458001 (delta=4.574216794s)
	I0229 18:49:51.021950    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 18:49:52.953463    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:49:52.953593    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:49:52.953674    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 18:49:55.295550    7340 main.go:141] libmachine: [stdout =====>] : 172.26.62.28
	
	I0229 18:49:55.295642    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:49:55.299943    7340 main.go:141] libmachine: Using SSH client type: native
	I0229 18:49:55.300563    7340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.62.28 22 <nil> <nil>}
	I0229 18:49:55.300563    7340 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709232591
	I0229 18:49:55.439200    7340 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Feb 29 18:49:51 UTC 2024
	
	I0229 18:49:55.439308    7340 fix.go:226] clock set: Thu Feb 29 18:49:51 UTC 2024
	 (err=<nil>)
	I0229 18:49:55.439524    7340 start.go:83] releasing machines lock for "multinode-421600", held for 1m58.1640179s
	I0229 18:49:55.439806    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 18:49:57.355901    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:49:57.355901    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:49:57.366376    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 18:49:59.691884    7340 main.go:141] libmachine: [stdout =====>] : 172.26.62.28
	
	I0229 18:49:59.702193    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:49:59.705254    7340 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:49:59.705898    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 18:49:59.714542    7340 ssh_runner.go:195] Run: cat /version.json
	I0229 18:49:59.714542    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 18:50:01.677549    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:50:01.677549    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:50:01.677652    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 18:50:01.682938    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:50:01.682938    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:50:01.683169    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 18:50:04.116327    7340 main.go:141] libmachine: [stdout =====>] : 172.26.62.28
	
	I0229 18:50:04.116327    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:50:04.126927    7340 sshutil.go:53] new ssh client: &{IP:172.26.62.28 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600\id_rsa Username:docker}
	I0229 18:50:04.141657    7340 main.go:141] libmachine: [stdout =====>] : 172.26.62.28
	
	I0229 18:50:04.144200    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:50:04.144527    7340 sshutil.go:53] new ssh client: &{IP:172.26.62.28 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600\id_rsa Username:docker}
	I0229 18:50:04.330016    7340 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0229 18:50:04.333729    7340 command_runner.go:130] > {"iso_version": "v1.32.1-1708638130-18020", "kicbase_version": "v0.0.42-1708008208-17936", "minikube_version": "v1.32.0", "commit": "d80143d2abd5a004b09b48bbc118a104326900af"}
	I0229 18:50:04.333729    7340 ssh_runner.go:235] Completed: cat /version.json: (4.6189305s)
	I0229 18:50:04.333729    7340 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6276858s)
	I0229 18:50:04.344453    7340 ssh_runner.go:195] Run: systemctl --version
	I0229 18:50:04.353062    7340 command_runner.go:130] > systemd 252 (252)
	I0229 18:50:04.353391    7340 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0229 18:50:04.362214    7340 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0229 18:50:04.364954    7340 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0229 18:50:04.375876    7340 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:50:04.384615    7340 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 18:50:04.404549    7340 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0229 18:50:04.413805    7340 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 18:50:04.413859    7340 start.go:475] detecting cgroup driver to use...
	I0229 18:50:04.414426    7340 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:50:04.446363    7340 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0229 18:50:04.458140    7340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 18:50:04.487369    7340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 18:50:04.490033    7340 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 18:50:04.513493    7340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 18:50:04.541491    7340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:50:04.567884    7340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 18:50:04.598041    7340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:50:04.629271    7340 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:50:04.675501    7340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 18:50:04.705511    7340 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:50:04.721143    7340 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0229 18:50:04.735525    7340 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:50:04.762340    7340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:50:04.942237    7340 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 18:50:04.971130    7340 start.go:475] detecting cgroup driver to use...
	I0229 18:50:04.984204    7340 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 18:50:05.005818    7340 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0229 18:50:05.005818    7340 command_runner.go:130] > [Unit]
	I0229 18:50:05.005818    7340 command_runner.go:130] > Description=Docker Application Container Engine
	I0229 18:50:05.005818    7340 command_runner.go:130] > Documentation=https://docs.docker.com
	I0229 18:50:05.005818    7340 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0229 18:50:05.005818    7340 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0229 18:50:05.005818    7340 command_runner.go:130] > StartLimitBurst=3
	I0229 18:50:05.005818    7340 command_runner.go:130] > StartLimitIntervalSec=60
	I0229 18:50:05.005818    7340 command_runner.go:130] > [Service]
	I0229 18:50:05.005818    7340 command_runner.go:130] > Type=notify
	I0229 18:50:05.005818    7340 command_runner.go:130] > Restart=on-failure
	I0229 18:50:05.005818    7340 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0229 18:50:05.005818    7340 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0229 18:50:05.005818    7340 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0229 18:50:05.005818    7340 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0229 18:50:05.005818    7340 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0229 18:50:05.005818    7340 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0229 18:50:05.005818    7340 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0229 18:50:05.005818    7340 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0229 18:50:05.005818    7340 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0229 18:50:05.005818    7340 command_runner.go:130] > ExecStart=
	I0229 18:50:05.005818    7340 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0229 18:50:05.005818    7340 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0229 18:50:05.005818    7340 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0229 18:50:05.005818    7340 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0229 18:50:05.005818    7340 command_runner.go:130] > LimitNOFILE=infinity
	I0229 18:50:05.005818    7340 command_runner.go:130] > LimitNPROC=infinity
	I0229 18:50:05.005818    7340 command_runner.go:130] > LimitCORE=infinity
	I0229 18:50:05.005818    7340 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0229 18:50:05.005818    7340 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0229 18:50:05.005818    7340 command_runner.go:130] > TasksMax=infinity
	I0229 18:50:05.005818    7340 command_runner.go:130] > TimeoutStartSec=0
	I0229 18:50:05.005818    7340 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0229 18:50:05.005818    7340 command_runner.go:130] > Delegate=yes
	I0229 18:50:05.005818    7340 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0229 18:50:05.005818    7340 command_runner.go:130] > KillMode=process
	I0229 18:50:05.005818    7340 command_runner.go:130] > [Install]
	I0229 18:50:05.005818    7340 command_runner.go:130] > WantedBy=multi-user.target
	I0229 18:50:05.014346    7340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:50:05.045204    7340 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 18:50:05.081546    7340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:50:05.112282    7340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 18:50:05.148967    7340 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 18:50:05.195214    7340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 18:50:05.217326    7340 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:50:05.248442    7340 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0229 18:50:05.260864    7340 ssh_runner.go:195] Run: which cri-dockerd
	I0229 18:50:05.262499    7340 command_runner.go:130] > /usr/bin/cri-dockerd
	I0229 18:50:05.274931    7340 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 18:50:05.291530    7340 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 18:50:05.333636    7340 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 18:50:05.527599    7340 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 18:50:05.702706    7340 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 18:50:05.702919    7340 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 18:50:05.740989    7340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:50:05.912279    7340 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 18:50:07.396252    7340 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.4838911s)
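At this point the log shows minikube scp-ing a 130-byte /etc/docker/daemon.json and restarting docker to switch it to the "cgroupfs" driver. The exact payload is not printed in the log; the following is a plausible minimal equivalent, written to a temp dir rather than /etc/docker:

```shell
# Hypothetical reconstruction of the daemon.json minikube pushes; the real
# 130-byte file is not shown in the log.
tmpdir=$(mktemp -d)
cat > "$tmpdir/daemon.json" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF
# On the VM this write is followed by:
#   sudo systemctl daemon-reload && sudo systemctl restart docker
python3 -m json.tool "$tmpdir/daemon.json" > /dev/null   # sanity-check the JSON
```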
	I0229 18:50:07.406253    7340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0229 18:50:07.438211    7340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 18:50:07.470022    7340 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0229 18:50:07.654519    7340 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0229 18:50:07.852288    7340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:50:08.040204    7340 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0229 18:50:08.077496    7340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 18:50:08.110524    7340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:50:08.294205    7340 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0229 18:50:08.386016    7340 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0229 18:50:08.398182    7340 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0229 18:50:08.407863    7340 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0229 18:50:08.407863    7340 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0229 18:50:08.407863    7340 command_runner.go:130] > Device: 0,22	Inode: 884         Links: 1
	I0229 18:50:08.407863    7340 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0229 18:50:08.407863    7340 command_runner.go:130] > Access: 2024-02-29 18:50:08.480343700 +0000
	I0229 18:50:08.407863    7340 command_runner.go:130] > Modify: 2024-02-29 18:50:08.480343700 +0000
	I0229 18:50:08.407863    7340 command_runner.go:130] > Change: 2024-02-29 18:50:08.484343742 +0000
	I0229 18:50:08.407863    7340 command_runner.go:130] >  Birth: -
	I0229 18:50:08.407863    7340 start.go:543] Will wait 60s for crictl version
	I0229 18:50:08.417677    7340 ssh_runner.go:195] Run: which crictl
	I0229 18:50:08.422791    7340 command_runner.go:130] > /usr/bin/crictl
	I0229 18:50:08.432222    7340 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 18:50:08.502108    7340 command_runner.go:130] > Version:  0.1.0
	I0229 18:50:08.503153    7340 command_runner.go:130] > RuntimeName:  docker
	I0229 18:50:08.503523    7340 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0229 18:50:08.503556    7340 command_runner.go:130] > RuntimeApiVersion:  v1
	I0229 18:50:08.509680    7340 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0229 18:50:08.518217    7340 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 18:50:08.542132    7340 command_runner.go:130] > 24.0.7
	I0229 18:50:08.562247    7340 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 18:50:08.592629    7340 command_runner.go:130] > 24.0.7
	I0229 18:50:08.594521    7340 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0229 18:50:08.594709    7340 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0229 18:50:08.599188    7340 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0229 18:50:08.599275    7340 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0229 18:50:08.599275    7340 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0229 18:50:08.599275    7340 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:76:3f:19 Flags:up|broadcast|multicast|running}
	I0229 18:50:08.602262    7340 ip.go:210] interface addr: fe80::841a:4367:8c9:abc/64
	I0229 18:50:08.602262    7340 ip.go:210] interface addr: 172.26.48.1/20
	I0229 18:50:08.610677    7340 ssh_runner.go:195] Run: grep 172.26.48.1	host.minikube.internal$ /etc/hosts
	I0229 18:50:08.617497    7340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.26.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:50:08.637620    7340 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 18:50:08.645297    7340 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 18:50:08.668861    7340 docker.go:685] Got preloaded images: 
	I0229 18:50:08.668861    7340 docker.go:691] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0229 18:50:08.677006    7340 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 18:50:08.693369    7340 command_runner.go:139] > {"Repositories":{}}
	I0229 18:50:08.705998    7340 ssh_runner.go:195] Run: which lz4
	I0229 18:50:08.708201    7340 command_runner.go:130] > /usr/bin/lz4
	I0229 18:50:08.711650    7340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0229 18:50:08.720296    7340 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 18:50:08.725456    7340 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 18:50:08.727136    7340 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 18:50:08.727222    7340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I0229 18:50:10.625150    7340 docker.go:649] Took 1.913059 seconds to copy over tarball
	I0229 18:50:10.635457    7340 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 18:50:20.622129    7340 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (9.9861183s)
	I0229 18:50:20.622194    7340 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 18:50:20.693486    7340 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 18:50:20.712261    7340 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.9-0":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.28.4":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.28.4":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.28.4":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.28.4":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0229 18:50:20.712569    7340 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0229 18:50:20.752696    7340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:50:20.948363    7340 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 18:50:22.938119    7340 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.9896453s)
	I0229 18:50:22.945645    7340 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 18:50:22.969340    7340 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0229 18:50:22.969340    7340 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0229 18:50:22.969340    7340 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0229 18:50:22.969340    7340 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0229 18:50:22.969340    7340 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0229 18:50:22.969340    7340 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0229 18:50:22.969340    7340 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0229 18:50:22.969340    7340 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:50:22.971877    7340 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0229 18:50:22.971961    7340 cache_images.go:84] Images are preloaded, skipping loading
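The preload sequence above follows a simple pattern: check whether the tarball already exists on the VM (`stat` exits non-zero), transfer it if not, extract it under /var, then delete it. A sketch of the same pattern in a temp dir, using a tiny gzip tarball and `cp` in place of the real lz4 archive and scp:

```shell
# Stand-in for the preload flow: a small gzip tarball instead of the real
# 423 MB lz4 archive, and cp instead of scp to the VM.
tmpdir=$(mktemp -d)
mkdir -p "$tmpdir/src/lib/docker" && echo data > "$tmpdir/src/lib/docker/blob"
tar -czf "$tmpdir/preloaded.tar.gz" -C "$tmpdir/src" lib
dest="$tmpdir/var"
mkdir -p "$dest"
# Existence check mirrors minikube's: a failing stat means transfer is needed.
if ! stat -c "%s %y" "$dest/preloaded.tar.gz" >/dev/null 2>&1; then
  cp "$tmpdir/preloaded.tar.gz" "$dest/"        # stands in for scp
fi
tar -xzf "$dest/preloaded.tar.gz" -C "$dest"    # stands in for tar -I lz4 -C /var
rm "$dest/preloaded.tar.gz"                     # matches the rm in the log
```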
	I0229 18:50:22.978990    7340 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 18:50:23.014749    7340 command_runner.go:130] > cgroupfs
	I0229 18:50:23.015267    7340 cni.go:84] Creating CNI manager for ""
	I0229 18:50:23.015478    7340 cni.go:136] 1 nodes found, recommending kindnet
	I0229 18:50:23.015478    7340 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:50:23.015478    7340 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.26.62.28 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-421600 NodeName:multinode-421600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.26.62.28"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.26.62.28 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 18:50:23.015751    7340 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.26.62.28
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-421600"
	  kubeletExtraArgs:
	    node-ip: 172.26.62.28
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.26.62.28"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:50:23.015849    7340 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-421600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.26.62.28
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-421600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
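The generated kubeadm config and kubelet unit above must agree on three values: the node IP, the CRI socket, and the cgroup driver detected earlier via `docker info --format {{.CgroupDriver}}` (a mismatch between kubelet and runtime cgroup drivers prevents pods from starting). A small consistency check over a saved copy of the KubeletConfiguration fragment (the file path and the trimmed-down contents here are illustrative, not produced by minikube):

```shell
# Trimmed-down copy of the KubeletConfiguration section from the log above.
tmpdir=$(mktemp -d)
cat > "$tmpdir/kubeadm.yaml" <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
EOF
# Extract the kubelet's driver and compare it with what docker reported.
driver=$(awk '/^cgroupDriver:/ {print $2}' "$tmpdir/kubeadm.yaml")
echo "$driver"   # prints "cgroupfs"
```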
	I0229 18:50:23.024886    7340 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 18:50:23.029849    7340 command_runner.go:130] > kubeadm
	I0229 18:50:23.042345    7340 command_runner.go:130] > kubectl
	I0229 18:50:23.042345    7340 command_runner.go:130] > kubelet
	I0229 18:50:23.042345    7340 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 18:50:23.050260    7340 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 18:50:23.068866    7340 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0229 18:50:23.098908    7340 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 18:50:23.129852    7340 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0229 18:50:23.174954    7340 ssh_runner.go:195] Run: grep 172.26.62.28	control-plane.minikube.internal$ /etc/hosts
	I0229 18:50:23.177317    7340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.26.62.28	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:50:23.200940    7340 certs.go:56] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600 for IP: 172.26.62.28
	I0229 18:50:23.200940    7340 certs.go:190] acquiring lock for shared ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:50:23.201772    7340 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0229 18:50:23.202073    7340 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0229 18:50:23.203001    7340 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\client.key
	I0229 18:50:23.203239    7340 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\client.crt with IP's: []
	I0229 18:50:23.377011    7340 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\client.crt ...
	I0229 18:50:23.377011    7340 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\client.crt: {Name:mka7dd9edea0df4f6b75ccefde832dc21e7b4715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:50:23.388258    7340 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\client.key ...
	I0229 18:50:23.388258    7340 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\client.key: {Name:mk9f9f34c9ade8cba7cdc05b8fb2f197202c8fd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:50:23.389317    7340 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\apiserver.key.dc23685e
	I0229 18:50:23.389317    7340 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\apiserver.crt.dc23685e with IP's: [172.26.62.28 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 18:50:23.644916    7340 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\apiserver.crt.dc23685e ...
	I0229 18:50:23.644916    7340 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\apiserver.crt.dc23685e: {Name:mk7164aa88ac2ed400fcf02adfba04f1d1253413 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:50:23.647577    7340 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\apiserver.key.dc23685e ...
	I0229 18:50:23.647577    7340 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\apiserver.key.dc23685e: {Name:mk646edc0f2ad3a9bd0e1387b76fa66df6ce28f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:50:23.648367    7340 certs.go:337] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\apiserver.crt.dc23685e -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\apiserver.crt
	I0229 18:50:23.655316    7340 certs.go:341] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\apiserver.key.dc23685e -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\apiserver.key
	I0229 18:50:23.662083    7340 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\proxy-client.key
	I0229 18:50:23.662083    7340 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\proxy-client.crt with IP's: []
	I0229 18:50:23.925621    7340 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\proxy-client.crt ...
	I0229 18:50:23.925621    7340 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\proxy-client.crt: {Name:mkb26f4001eb38ec08d6cbca4355ba50c1db83de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:50:23.936733    7340 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\proxy-client.key ...
	I0229 18:50:23.936733    7340 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\proxy-client.key: {Name:mk29f4af3f92a5f44441e99561d83b8ec452ff3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:50:23.936991    7340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0229 18:50:23.938117    7340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0229 18:50:23.938247    7340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0229 18:50:23.946185    7340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0229 18:50:23.947652    7340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0229 18:50:23.949439    7340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0229 18:50:23.949549    7340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0229 18:50:23.949673    7340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0229 18:50:23.949780    7340 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4356.pem (1338 bytes)
	W0229 18:50:23.950243    7340 certs.go:433] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4356_empty.pem, impossibly tiny 0 bytes
	I0229 18:50:23.950243    7340 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0229 18:50:23.950487    7340 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0229 18:50:23.950680    7340 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0229 18:50:23.950833    7340 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0229 18:50:23.950833    7340 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem (1708 bytes)
	I0229 18:50:23.950833    7340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem -> /usr/share/ca-certificates/43562.pem
	I0229 18:50:23.950833    7340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:50:23.951462    7340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4356.pem -> /usr/share/ca-certificates/4356.pem
	I0229 18:50:23.952741    7340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 18:50:24.000097    7340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 18:50:24.045182    7340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 18:50:24.088734    7340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 18:50:24.131106    7340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:50:24.172313    7340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 18:50:24.217466    7340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:50:24.259692    7340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 18:50:24.302573    7340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem --> /usr/share/ca-certificates/43562.pem (1708 bytes)
	I0229 18:50:24.343572    7340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:50:24.378469    7340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4356.pem --> /usr/share/ca-certificates/4356.pem (1338 bytes)
	I0229 18:50:24.430302    7340 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 18:50:24.471940    7340 ssh_runner.go:195] Run: openssl version
	I0229 18:50:24.480661    7340 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0229 18:50:24.489937    7340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4356.pem && ln -fs /usr/share/ca-certificates/4356.pem /etc/ssl/certs/4356.pem"
	I0229 18:50:24.516967    7340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4356.pem
	I0229 18:50:24.518789    7340 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 29 17:55 /usr/share/ca-certificates/4356.pem
	I0229 18:50:24.518789    7340 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:55 /usr/share/ca-certificates/4356.pem
	I0229 18:50:24.532848    7340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4356.pem
	I0229 18:50:24.535836    7340 command_runner.go:130] > 51391683
	I0229 18:50:24.549672    7340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4356.pem /etc/ssl/certs/51391683.0"
	I0229 18:50:24.580765    7340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43562.pem && ln -fs /usr/share/ca-certificates/43562.pem /etc/ssl/certs/43562.pem"
	I0229 18:50:24.610323    7340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43562.pem
	I0229 18:50:24.617611    7340 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 29 17:55 /usr/share/ca-certificates/43562.pem
	I0229 18:50:24.618059    7340 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:55 /usr/share/ca-certificates/43562.pem
	I0229 18:50:24.627262    7340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43562.pem
	I0229 18:50:24.635338    7340 command_runner.go:130] > 3ec20f2e
	I0229 18:50:24.644996    7340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43562.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 18:50:24.672789    7340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:50:24.700894    7340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:50:24.709082    7340 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 29 17:41 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:50:24.709082    7340 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:41 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:50:24.717835    7340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:50:24.726125    7340 command_runner.go:130] > b5213941
	I0229 18:50:24.734023    7340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:50:24.761254    7340 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:50:24.763451    7340 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 18:50:24.763451    7340 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 18:50:24.768101    7340 kubeadm.go:404] StartCluster: {Name:multinode-421600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.28.4 ClusterName:multinode-421600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.26.62.28 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize
:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:50:24.774910    7340 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 18:50:24.814518    7340 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 18:50:24.831592    7340 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0229 18:50:24.831592    7340 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0229 18:50:24.831592    7340 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0229 18:50:24.840899    7340 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 18:50:24.868475    7340 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 18:50:24.871301    7340 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0229 18:50:24.871301    7340 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0229 18:50:24.871301    7340 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0229 18:50:24.871301    7340 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:50:24.884618    7340 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 18:50:24.884765    7340 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0229 18:50:25.545798    7340 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 18:50:25.545798    7340 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 18:50:38.467365    7340 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0229 18:50:38.467365    7340 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I0229 18:50:38.467365    7340 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 18:50:38.467365    7340 command_runner.go:130] > [preflight] Running pre-flight checks
	I0229 18:50:38.467365    7340 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 18:50:38.467930    7340 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 18:50:38.468374    7340 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 18:50:38.468455    7340 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 18:50:38.468684    7340 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 18:50:38.468763    7340 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 18:50:38.469202    7340 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 18:50:38.469202    7340 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 18:50:38.470622    7340 out.go:204]   - Generating certificates and keys ...
	I0229 18:50:38.470904    7340 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 18:50:38.470904    7340 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0229 18:50:38.471118    7340 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 18:50:38.471118    7340 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0229 18:50:38.471477    7340 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 18:50:38.471477    7340 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 18:50:38.471566    7340 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0229 18:50:38.471566    7340 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 18:50:38.471745    7340 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0229 18:50:38.471745    7340 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 18:50:38.471887    7340 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0229 18:50:38.471928    7340 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 18:50:38.472123    7340 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 18:50:38.472177    7340 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0229 18:50:38.472564    7340 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-421600] and IPs [172.26.62.28 127.0.0.1 ::1]
	I0229 18:50:38.472564    7340 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-421600] and IPs [172.26.62.28 127.0.0.1 ::1]
	I0229 18:50:38.472776    7340 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0229 18:50:38.472776    7340 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 18:50:38.473150    7340 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-421600] and IPs [172.26.62.28 127.0.0.1 ::1]
	I0229 18:50:38.473199    7340 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-421600] and IPs [172.26.62.28 127.0.0.1 ::1]
	I0229 18:50:38.473358    7340 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 18:50:38.473409    7340 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 18:50:38.473627    7340 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 18:50:38.473627    7340 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 18:50:38.474037    7340 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0229 18:50:38.474037    7340 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 18:50:38.474037    7340 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 18:50:38.474037    7340 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 18:50:38.474037    7340 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 18:50:38.474037    7340 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 18:50:38.474565    7340 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 18:50:38.474656    7340 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 18:50:38.474832    7340 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 18:50:38.474832    7340 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 18:50:38.474832    7340 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 18:50:38.474832    7340 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 18:50:38.475408    7340 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 18:50:38.475408    7340 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 18:50:38.475511    7340 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 18:50:38.475511    7340 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 18:50:38.476590    7340 out.go:204]   - Booting up control plane ...
	I0229 18:50:38.476880    7340 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 18:50:38.476880    7340 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 18:50:38.477140    7340 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 18:50:38.477140    7340 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 18:50:38.477140    7340 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 18:50:38.477140    7340 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 18:50:38.477140    7340 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 18:50:38.477140    7340 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 18:50:38.477764    7340 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 18:50:38.477809    7340 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 18:50:38.477841    7340 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0229 18:50:38.477841    7340 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0229 18:50:38.477841    7340 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 18:50:38.477841    7340 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 18:50:38.478432    7340 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.005273 seconds
	I0229 18:50:38.478432    7340 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.005273 seconds
	I0229 18:50:38.478793    7340 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 18:50:38.478793    7340 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0229 18:50:38.479297    7340 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 18:50:38.479297    7340 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0229 18:50:38.479401    7340 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0229 18:50:38.479401    7340 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0229 18:50:38.479401    7340 kubeadm.go:322] [mark-control-plane] Marking the node multinode-421600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 18:50:38.479401    7340 command_runner.go:130] > [mark-control-plane] Marking the node multinode-421600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0229 18:50:38.479401    7340 kubeadm.go:322] [bootstrap-token] Using token: 0vmpsr.0jrnc7qk31g5ong8
	I0229 18:50:38.479401    7340 command_runner.go:130] > [bootstrap-token] Using token: 0vmpsr.0jrnc7qk31g5ong8
	I0229 18:50:38.480002    7340 out.go:204]   - Configuring RBAC rules ...
	I0229 18:50:38.480698    7340 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 18:50:38.480755    7340 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0229 18:50:38.480975    7340 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 18:50:38.481006    7340 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0229 18:50:38.481040    7340 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 18:50:38.481040    7340 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0229 18:50:38.481040    7340 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 18:50:38.481040    7340 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0229 18:50:38.481640    7340 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 18:50:38.481640    7340 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0229 18:50:38.481640    7340 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 18:50:38.481640    7340 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0229 18:50:38.481640    7340 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 18:50:38.482165    7340 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0229 18:50:38.482274    7340 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0229 18:50:38.482274    7340 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0229 18:50:38.482274    7340 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0229 18:50:38.482274    7340 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0229 18:50:38.482274    7340 kubeadm.go:322] 
	I0229 18:50:38.482274    7340 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0229 18:50:38.482274    7340 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0229 18:50:38.482274    7340 kubeadm.go:322] 
	I0229 18:50:38.482274    7340 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0229 18:50:38.482274    7340 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0229 18:50:38.482274    7340 kubeadm.go:322] 
	I0229 18:50:38.482813    7340 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0229 18:50:38.482870    7340 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0229 18:50:38.482925    7340 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 18:50:38.482925    7340 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0229 18:50:38.482925    7340 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 18:50:38.482925    7340 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0229 18:50:38.482925    7340 kubeadm.go:322] 
	I0229 18:50:38.482925    7340 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0229 18:50:38.482925    7340 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0229 18:50:38.482925    7340 kubeadm.go:322] 
	I0229 18:50:38.482925    7340 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 18:50:38.482925    7340 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0229 18:50:38.482925    7340 kubeadm.go:322] 
	I0229 18:50:38.483519    7340 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0229 18:50:38.483519    7340 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0229 18:50:38.483573    7340 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 18:50:38.483573    7340 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0229 18:50:38.483573    7340 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 18:50:38.483573    7340 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0229 18:50:38.483573    7340 kubeadm.go:322] 
	I0229 18:50:38.483573    7340 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0229 18:50:38.483573    7340 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0229 18:50:38.483573    7340 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0229 18:50:38.483573    7340 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0229 18:50:38.483573    7340 kubeadm.go:322] 
	I0229 18:50:38.483573    7340 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 0vmpsr.0jrnc7qk31g5ong8 \
	I0229 18:50:38.483573    7340 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 0vmpsr.0jrnc7qk31g5ong8 \
	I0229 18:50:38.483573    7340 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:cee10ebbc824bfc36c0d81f93293570211b0e6bda8098cea612d080b286ee20e \
	I0229 18:50:38.483573    7340 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cee10ebbc824bfc36c0d81f93293570211b0e6bda8098cea612d080b286ee20e \
	I0229 18:50:38.484673    7340 command_runner.go:130] > 	--control-plane 
	I0229 18:50:38.484673    7340 kubeadm.go:322] 	--control-plane 
	I0229 18:50:38.484673    7340 kubeadm.go:322] 
	I0229 18:50:38.484673    7340 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0229 18:50:38.484673    7340 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0229 18:50:38.484673    7340 kubeadm.go:322] 
	I0229 18:50:38.484673    7340 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 0vmpsr.0jrnc7qk31g5ong8 \
	I0229 18:50:38.484673    7340 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 0vmpsr.0jrnc7qk31g5ong8 \
	I0229 18:50:38.484673    7340 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:cee10ebbc824bfc36c0d81f93293570211b0e6bda8098cea612d080b286ee20e 
	I0229 18:50:38.484673    7340 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:cee10ebbc824bfc36c0d81f93293570211b0e6bda8098cea612d080b286ee20e 
	I0229 18:50:38.484673    7340 cni.go:84] Creating CNI manager for ""
	I0229 18:50:38.484673    7340 cni.go:136] 1 nodes found, recommending kindnet
	I0229 18:50:38.486370    7340 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0229 18:50:38.494971    7340 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0229 18:50:38.501853    7340 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0229 18:50:38.501885    7340 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0229 18:50:38.501885    7340 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0229 18:50:38.501885    7340 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0229 18:50:38.501920    7340 command_runner.go:130] > Access: 2024-02-29 18:48:57.853544100 +0000
	I0229 18:50:38.501920    7340 command_runner.go:130] > Modify: 2024-02-23 03:39:37.000000000 +0000
	I0229 18:50:38.501920    7340 command_runner.go:130] > Change: 2024-02-29 18:48:48.933000000 +0000
	I0229 18:50:38.501920    7340 command_runner.go:130] >  Birth: -
	I0229 18:50:38.501986    7340 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0229 18:50:38.501986    7340 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0229 18:50:38.575082    7340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0229 18:50:39.752096    7340 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0229 18:50:39.752096    7340 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0229 18:50:39.752096    7340 command_runner.go:130] > serviceaccount/kindnet created
	I0229 18:50:39.752096    7340 command_runner.go:130] > daemonset.apps/kindnet created
	I0229 18:50:39.752096    7340 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.176949s)
	I0229 18:50:39.752096    7340 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 18:50:39.766516    7340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:50:39.768312    7340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19 minikube.k8s.io/name=multinode-421600 minikube.k8s.io/updated_at=2024_02_29T18_50_39_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:50:39.773920    7340 command_runner.go:130] > -16
	I0229 18:50:39.778317    7340 ops.go:34] apiserver oom_adj: -16
	I0229 18:50:39.993940    7340 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0229 18:50:40.004623    7340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:50:40.033502    7340 command_runner.go:130] > node/multinode-421600 labeled
	I0229 18:50:40.125711    7340 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 18:50:40.514151    7340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:50:40.628177    7340 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 18:50:41.013857    7340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:50:41.121332    7340 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 18:50:41.514784    7340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:50:41.636058    7340 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 18:50:42.009735    7340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:50:42.111978    7340 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 18:50:42.505757    7340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:50:42.610499    7340 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 18:50:43.003785    7340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:50:43.116430    7340 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 18:50:43.514203    7340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:50:43.617218    7340 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 18:50:44.010787    7340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:50:44.118140    7340 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 18:50:44.506742    7340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:50:44.622674    7340 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 18:50:45.018727    7340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:50:45.128179    7340 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 18:50:45.521154    7340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:50:45.626490    7340 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 18:50:46.015676    7340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:50:46.118023    7340 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 18:50:46.520929    7340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:50:46.615232    7340 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 18:50:47.014660    7340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:50:47.105068    7340 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 18:50:47.504950    7340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:50:47.614709    7340 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 18:50:48.004786    7340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:50:48.115509    7340 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 18:50:48.509522    7340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:50:48.601634    7340 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 18:50:49.017887    7340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:50:49.117650    7340 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 18:50:49.506113    7340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:50:49.614249    7340 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 18:50:50.021564    7340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:50:50.130595    7340 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0229 18:50:50.507630    7340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:50:50.628820    7340 command_runner.go:130] > NAME      SECRETS   AGE
	I0229 18:50:50.628820    7340 command_runner.go:130] > default   0         0s
	I0229 18:50:50.630898    7340 kubeadm.go:1088] duration metric: took 10.8768677s to wait for elevateKubeSystemPrivileges.
	I0229 18:50:50.630898    7340 kubeadm.go:406] StartCluster complete in 25.8613621s
	I0229 18:50:50.631039    7340 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:50:50.631277    7340 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 18:50:50.633076    7340 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:50:50.634055    7340 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 18:50:50.634055    7340 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 18:50:50.634055    7340 addons.go:69] Setting storage-provisioner=true in profile "multinode-421600"
	I0229 18:50:50.634055    7340 addons.go:234] Setting addon storage-provisioner=true in "multinode-421600"
	I0229 18:50:50.635095    7340 addons.go:69] Setting default-storageclass=true in profile "multinode-421600"
	I0229 18:50:50.635095    7340 host.go:66] Checking if "multinode-421600" exists ...
	I0229 18:50:50.635095    7340 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-421600"
	I0229 18:50:50.635240    7340 config.go:182] Loaded profile config "multinode-421600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 18:50:50.636244    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 18:50:50.636798    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 18:50:50.651963    7340 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 18:50:50.652608    7340 kapi.go:59] client config for multinode-421600: &rest.Config{Host:"https://172.26.62.28:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-421600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-421600\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ff0600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 18:50:50.653872    7340 cert_rotation.go:137] Starting client certificate rotation controller
	I0229 18:50:50.654510    7340 round_trippers.go:463] GET https://172.26.62.28:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0229 18:50:50.654510    7340 round_trippers.go:469] Request Headers:
	I0229 18:50:50.654510    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:50:50.654510    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:50:50.678161    7340 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0229 18:50:50.678161    7340 round_trippers.go:577] Response Headers:
	I0229 18:50:50.678161    7340 round_trippers.go:580]     Audit-Id: 81d69d7b-ea99-458e-8f0e-b746ee7937bc
	I0229 18:50:50.678161    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:50:50.678161    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:50:50.678161    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:50:50.678161    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:50:50.678161    7340 round_trippers.go:580]     Content-Length: 291
	I0229 18:50:50.678161    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:50:50 GMT
	I0229 18:50:50.678161    7340 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b9192a23-827d-4114-8861-df907bfdc0ef","resourceVersion":"226","creationTimestamp":"2024-02-29T18:50:38Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0229 18:50:50.679029    7340 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b9192a23-827d-4114-8861-df907bfdc0ef","resourceVersion":"226","creationTimestamp":"2024-02-29T18:50:38Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0229 18:50:50.679029    7340 round_trippers.go:463] PUT https://172.26.62.28:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0229 18:50:50.679029    7340 round_trippers.go:469] Request Headers:
	I0229 18:50:50.679029    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:50:50.679029    7340 round_trippers.go:473]     Content-Type: application/json
	I0229 18:50:50.679029    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:50:50.685689    7340 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 18:50:50.688062    7340 round_trippers.go:577] Response Headers:
	I0229 18:50:50.688062    7340 round_trippers.go:580]     Content-Length: 291
	I0229 18:50:50.688062    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:50:50 GMT
	I0229 18:50:50.688062    7340 round_trippers.go:580]     Audit-Id: c8eeeb47-ab90-4d16-9365-931241f28686
	I0229 18:50:50.688062    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:50:50.688062    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:50:50.688062    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:50:50.688062    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:50:50.688062    7340 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b9192a23-827d-4114-8861-df907bfdc0ef","resourceVersion":"310","creationTimestamp":"2024-02-29T18:50:38Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0229 18:50:50.783741    7340 command_runner.go:130] > apiVersion: v1
	I0229 18:50:50.789175    7340 command_runner.go:130] > data:
	I0229 18:50:50.789175    7340 command_runner.go:130] >   Corefile: |
	I0229 18:50:50.789175    7340 command_runner.go:130] >     .:53 {
	I0229 18:50:50.789243    7340 command_runner.go:130] >         errors
	I0229 18:50:50.789243    7340 command_runner.go:130] >         health {
	I0229 18:50:50.789243    7340 command_runner.go:130] >            lameduck 5s
	I0229 18:50:50.789309    7340 command_runner.go:130] >         }
	I0229 18:50:50.789309    7340 command_runner.go:130] >         ready
	I0229 18:50:50.789309    7340 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0229 18:50:50.789309    7340 command_runner.go:130] >            pods insecure
	I0229 18:50:50.789309    7340 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0229 18:50:50.789309    7340 command_runner.go:130] >            ttl 30
	I0229 18:50:50.789309    7340 command_runner.go:130] >         }
	I0229 18:50:50.789444    7340 command_runner.go:130] >         prometheus :9153
	I0229 18:50:50.789444    7340 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0229 18:50:50.789444    7340 command_runner.go:130] >            max_concurrent 1000
	I0229 18:50:50.789444    7340 command_runner.go:130] >         }
	I0229 18:50:50.789444    7340 command_runner.go:130] >         cache 30
	I0229 18:50:50.789444    7340 command_runner.go:130] >         loop
	I0229 18:50:50.789444    7340 command_runner.go:130] >         reload
	I0229 18:50:50.789531    7340 command_runner.go:130] >         loadbalance
	I0229 18:50:50.789531    7340 command_runner.go:130] >     }
	I0229 18:50:50.789531    7340 command_runner.go:130] > kind: ConfigMap
	I0229 18:50:50.789531    7340 command_runner.go:130] > metadata:
	I0229 18:50:50.789531    7340 command_runner.go:130] >   creationTimestamp: "2024-02-29T18:50:38Z"
	I0229 18:50:50.789531    7340 command_runner.go:130] >   name: coredns
	I0229 18:50:50.789627    7340 command_runner.go:130] >   namespace: kube-system
	I0229 18:50:50.789627    7340 command_runner.go:130] >   resourceVersion: "222"
	I0229 18:50:50.789627    7340 command_runner.go:130] >   uid: 02fa6c60-1e04-4f3a-a567-42fb00116f24
	I0229 18:50:50.794117    7340 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.26.48.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0229 18:50:51.159080    7340 round_trippers.go:463] GET https://172.26.62.28:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0229 18:50:51.159080    7340 round_trippers.go:469] Request Headers:
	I0229 18:50:51.159080    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:50:51.159080    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:50:51.189624    7340 round_trippers.go:574] Response Status: 200 OK in 30 milliseconds
	I0229 18:50:51.189702    7340 round_trippers.go:577] Response Headers:
	I0229 18:50:51.189759    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:50:51.189759    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:50:51.189759    7340 round_trippers.go:580]     Content-Length: 291
	I0229 18:50:51.189814    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:50:51 GMT
	I0229 18:50:51.189814    7340 round_trippers.go:580]     Audit-Id: 220ea461-fb73-4839-a4cb-06fe5ff7bef6
	I0229 18:50:51.189814    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:50:51.189876    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:50:51.189981    7340 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b9192a23-827d-4114-8861-df907bfdc0ef","resourceVersion":"310","creationTimestamp":"2024-02-29T18:50:38Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0229 18:50:51.190069    7340 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-421600" context rescaled to 1 replicas
	I0229 18:50:51.190069    7340 start.go:223] Will wait 6m0s for node &{Name: IP:172.26.62.28 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 18:50:51.190936    7340 out.go:177] * Verifying Kubernetes components...
	I0229 18:50:51.207969    7340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:50:51.314050    7340 command_runner.go:130] > configmap/coredns replaced
	I0229 18:50:51.314152    7340 start.go:929] {"host.minikube.internal": 172.26.48.1} host record injected into CoreDNS's ConfigMap
	I0229 18:50:51.315177    7340 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 18:50:51.315668    7340 kapi.go:59] client config for multinode-421600: &rest.Config{Host:"https://172.26.62.28:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-421600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-421600\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ff0600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 18:50:51.316766    7340 node_ready.go:35] waiting up to 6m0s for node "multinode-421600" to be "Ready" ...
	I0229 18:50:51.317006    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:50:51.317038    7340 round_trippers.go:469] Request Headers:
	I0229 18:50:51.317038    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:50:51.317038    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:50:51.327087    7340 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0229 18:50:51.327087    7340 round_trippers.go:577] Response Headers:
	I0229 18:50:51.327087    7340 round_trippers.go:580]     Audit-Id: 73a0ee15-80f2-489c-acd5-79ac8693daf8
	I0229 18:50:51.327087    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:50:51.327087    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:50:51.327087    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:50:51.327087    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:50:51.327087    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:50:51 GMT
	I0229 18:50:51.330266    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"331","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 18:50:51.818720    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:50:51.818973    7340 round_trippers.go:469] Request Headers:
	I0229 18:50:51.818973    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:50:51.818973    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:50:51.823656    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:50:51.823710    7340 round_trippers.go:577] Response Headers:
	I0229 18:50:51.823710    7340 round_trippers.go:580]     Audit-Id: 5305f555-29cf-4b49-8cd2-7aafb4dd566d
	I0229 18:50:51.823710    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:50:51.823710    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:50:51.823710    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:50:51.823710    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:50:51.823710    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:50:51 GMT
	I0229 18:50:51.823710    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"331","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 18:50:52.327187    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:50:52.327268    7340 round_trippers.go:469] Request Headers:
	I0229 18:50:52.327268    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:50:52.327268    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:50:52.327559    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:50:52.327559    7340 round_trippers.go:577] Response Headers:
	I0229 18:50:52.327559    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:50:52.327559    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:50:52.327559    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:50:52.327559    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:50:52 GMT
	I0229 18:50:52.327559    7340 round_trippers.go:580]     Audit-Id: 7beaca16-049e-491e-be25-e0355b74a649
	I0229 18:50:52.327559    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:50:52.327559    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"331","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 18:50:52.685920    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:50:52.696511    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:50:52.696555    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:50:52.696555    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:50:52.697390    7340 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 18:50:52.698186    7340 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 18:50:52.698224    7340 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 18:50:52.698266    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 18:50:52.699499    7340 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 18:50:52.699631    7340 kapi.go:59] client config for multinode-421600: &rest.Config{Host:"https://172.26.62.28:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-421600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-421600\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ff0600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 18:50:52.700850    7340 addons.go:234] Setting addon default-storageclass=true in "multinode-421600"
	I0229 18:50:52.701053    7340 host.go:66] Checking if "multinode-421600" exists ...
	I0229 18:50:52.701980    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 18:50:52.822314    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:50:52.822314    7340 round_trippers.go:469] Request Headers:
	I0229 18:50:52.822314    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:50:52.822314    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:50:52.823402    7340 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 18:50:52.828996    7340 round_trippers.go:577] Response Headers:
	I0229 18:50:52.828996    7340 round_trippers.go:580]     Audit-Id: 3f86e6df-db4d-4b80-8e7a-9c041698630b
	I0229 18:50:52.828996    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:50:52.828996    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:50:52.828996    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:50:52.829122    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:50:52.829122    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:50:52 GMT
	I0229 18:50:52.829981    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"331","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 18:50:53.327604    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:50:53.327604    7340 round_trippers.go:469] Request Headers:
	I0229 18:50:53.327604    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:50:53.327604    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:50:53.328249    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:50:53.331859    7340 round_trippers.go:577] Response Headers:
	I0229 18:50:53.331859    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:50:53.331859    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:50:53.331859    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:50:53 GMT
	I0229 18:50:53.331859    7340 round_trippers.go:580]     Audit-Id: 78576d16-46a6-4062-b7f2-cdfe14eaa683
	I0229 18:50:53.331859    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:50:53.331859    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:50:53.331934    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"331","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 18:50:53.332561    7340 node_ready.go:58] node "multinode-421600" has status "Ready":"False"
	I0229 18:50:53.823018    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:50:53.823018    7340 round_trippers.go:469] Request Headers:
	I0229 18:50:53.823018    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:50:53.823018    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:50:53.827863    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:50:53.827863    7340 round_trippers.go:577] Response Headers:
	I0229 18:50:53.827863    7340 round_trippers.go:580]     Audit-Id: a13efdd8-f2f9-43d2-9b86-085437c57b17
	I0229 18:50:53.827863    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:50:53.827863    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:50:53.827863    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:50:53.827971    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:50:53.827971    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:50:53 GMT
	I0229 18:50:53.828313    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"331","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 18:50:54.320466    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:50:54.320602    7340 round_trippers.go:469] Request Headers:
	I0229 18:50:54.320602    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:50:54.320602    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:50:54.320884    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:50:54.324196    7340 round_trippers.go:577] Response Headers:
	I0229 18:50:54.324196    7340 round_trippers.go:580]     Audit-Id: 792a1808-18d7-4ae3-846b-de34a6d930ec
	I0229 18:50:54.324196    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:50:54.324196    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:50:54.324196    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:50:54.324591    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:50:54.324712    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:50:54 GMT
	I0229 18:50:54.324957    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"331","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 18:50:54.758329    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:50:54.758329    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:50:54.758329    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 18:50:54.827190    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:50:54.827190    7340 round_trippers.go:469] Request Headers:
	I0229 18:50:54.827190    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:50:54.827190    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:50:54.827817    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:50:54.831914    7340 round_trippers.go:577] Response Headers:
	I0229 18:50:54.831914    7340 round_trippers.go:580]     Audit-Id: 02d7c9fc-2d2d-4342-a8b7-96fdb3168f08
	I0229 18:50:54.831914    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:50:54.831914    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:50:54.832003    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:50:54.832003    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:50:54.832003    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:50:54 GMT
	I0229 18:50:54.832326    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"331","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 18:50:54.850364    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:50:54.850429    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:50:54.850567    7340 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 18:50:54.850567    7340 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 18:50:54.850646    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 18:50:55.325273    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:50:55.325273    7340 round_trippers.go:469] Request Headers:
	I0229 18:50:55.325273    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:50:55.325273    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:50:55.325960    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:50:55.330685    7340 round_trippers.go:577] Response Headers:
	I0229 18:50:55.330685    7340 round_trippers.go:580]     Audit-Id: 076ab8f3-2f5f-4d31-a7ff-e33b55d7ea3e
	I0229 18:50:55.330685    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:50:55.330685    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:50:55.330685    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:50:55.330685    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:50:55.330685    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:50:55 GMT
	I0229 18:50:55.333658    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"331","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 18:50:55.333884    7340 node_ready.go:58] node "multinode-421600" has status "Ready":"False"
	I0229 18:50:55.828684    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:50:55.828752    7340 round_trippers.go:469] Request Headers:
	I0229 18:50:55.828752    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:50:55.828798    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:50:55.841924    7340 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0229 18:50:55.842001    7340 round_trippers.go:577] Response Headers:
	I0229 18:50:55.842001    7340 round_trippers.go:580]     Audit-Id: 98ecef5c-4d4c-460d-826e-8df0bc337e6b
	I0229 18:50:55.842001    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:50:55.842073    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:50:55.842073    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:50:55.842073    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:50:55.842073    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:50:56 GMT
	I0229 18:50:55.842359    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"331","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 18:50:56.325790    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:50:56.325826    7340 round_trippers.go:469] Request Headers:
	I0229 18:50:56.325826    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:50:56.325826    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:50:56.329573    7340 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:50:56.329573    7340 round_trippers.go:577] Response Headers:
	I0229 18:50:56.329573    7340 round_trippers.go:580]     Audit-Id: e042ce69-c7d5-46a3-9d56-161c05dac4d0
	I0229 18:50:56.329573    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:50:56.329573    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:50:56.329657    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:50:56.329657    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:50:56.329657    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:50:56 GMT
	I0229 18:50:56.329812    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"331","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 18:50:56.821594    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:50:56.821594    7340 round_trippers.go:469] Request Headers:
	I0229 18:50:56.821594    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:50:56.821594    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:50:56.828400    7340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:50:56.828400    7340 round_trippers.go:577] Response Headers:
	I0229 18:50:56.828400    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:50:56.828400    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:50:56.828498    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:50:56.828498    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:50:56.828498    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:50:56 GMT
	I0229 18:50:56.828498    7340 round_trippers.go:580]     Audit-Id: 96140a0d-8992-4d14-9999-ff7b749f7c69
	I0229 18:50:56.828498    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"331","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 18:50:56.901494    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:50:56.901494    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:50:56.909157    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 18:50:57.195601    7340 main.go:141] libmachine: [stdout =====>] : 172.26.62.28
	
	I0229 18:50:57.197747    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:50:57.198107    7340 sshutil.go:53] new ssh client: &{IP:172.26.62.28 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600\id_rsa Username:docker}
	I0229 18:50:57.323655    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:50:57.323769    7340 round_trippers.go:469] Request Headers:
	I0229 18:50:57.323769    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:50:57.323769    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:50:57.331525    7340 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 18:50:57.331525    7340 round_trippers.go:577] Response Headers:
	I0229 18:50:57.331525    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:50:57 GMT
	I0229 18:50:57.331525    7340 round_trippers.go:580]     Audit-Id: ab7e8632-8786-4b01-9caf-4530d7234eb5
	I0229 18:50:57.331525    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:50:57.331525    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:50:57.331525    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:50:57.331525    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:50:57.332671    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"331","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 18:50:57.333070    7340 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 18:50:57.829430    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:50:57.829430    7340 round_trippers.go:469] Request Headers:
	I0229 18:50:57.829430    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:50:57.829430    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:50:57.829986    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:50:57.835487    7340 round_trippers.go:577] Response Headers:
	I0229 18:50:57.835536    7340 round_trippers.go:580]     Audit-Id: a07689f7-18e3-46ae-8e50-f711c38bee25
	I0229 18:50:57.835536    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:50:57.835536    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:50:57.835536    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:50:57.835536    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:50:57.835536    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:50:57 GMT
	I0229 18:50:57.835536    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"331","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 18:50:57.836427    7340 node_ready.go:58] node "multinode-421600" has status "Ready":"False"
	I0229 18:50:57.979679    7340 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0229 18:50:57.979794    7340 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0229 18:50:57.979794    7340 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0229 18:50:57.979895    7340 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0229 18:50:57.979895    7340 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0229 18:50:57.979895    7340 command_runner.go:130] > pod/storage-provisioner created
	I0229 18:50:58.326625    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:50:58.326625    7340 round_trippers.go:469] Request Headers:
	I0229 18:50:58.326625    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:50:58.326625    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:50:58.331158    7340 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:50:58.331158    7340 round_trippers.go:577] Response Headers:
	I0229 18:50:58.331238    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:50:58.331238    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:50:58.331238    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:50:58.331266    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:50:58 GMT
	I0229 18:50:58.331266    7340 round_trippers.go:580]     Audit-Id: bc2b8c35-e7c9-45a4-9329-3c6fcc8a6ee3
	I0229 18:50:58.331266    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:50:58.331266    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"331","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 18:50:58.829557    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:50:58.829661    7340 round_trippers.go:469] Request Headers:
	I0229 18:50:58.829661    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:50:58.829661    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:50:58.834725    7340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 18:50:58.834725    7340 round_trippers.go:577] Response Headers:
	I0229 18:50:58.834725    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:50:58.834725    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:50:58.834725    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:50:58.834725    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:50:58.834725    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:50:58 GMT
	I0229 18:50:58.834725    7340 round_trippers.go:580]     Audit-Id: 3367d8d1-6c55-4e67-b0a2-b853eaaa2980
	I0229 18:50:58.835405    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"331","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 18:50:59.266369    7340 main.go:141] libmachine: [stdout =====>] : 172.26.62.28
	
	I0229 18:50:59.266369    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:50:59.267258    7340 sshutil.go:53] new ssh client: &{IP:172.26.62.28 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600\id_rsa Username:docker}
	I0229 18:50:59.320581    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:50:59.320581    7340 round_trippers.go:469] Request Headers:
	I0229 18:50:59.320959    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:50:59.320959    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:50:59.325121    7340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 18:50:59.325121    7340 round_trippers.go:577] Response Headers:
	I0229 18:50:59.325121    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:50:59.325121    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:50:59 GMT
	I0229 18:50:59.325255    7340 round_trippers.go:580]     Audit-Id: 9c6a83e6-5aa4-473f-9f73-33aeef17eec4
	I0229 18:50:59.325255    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:50:59.325255    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:50:59.325255    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:50:59.325630    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"331","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 18:50:59.400572    7340 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 18:50:59.635979    7340 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0229 18:50:59.636460    7340 round_trippers.go:463] GET https://172.26.62.28:8443/apis/storage.k8s.io/v1/storageclasses
	I0229 18:50:59.636491    7340 round_trippers.go:469] Request Headers:
	I0229 18:50:59.636551    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:50:59.636617    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:50:59.637002    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:50:59.637002    7340 round_trippers.go:577] Response Headers:
	I0229 18:50:59.637002    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:50:59.637002    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:50:59.637002    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:50:59.637002    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:50:59.637002    7340 round_trippers.go:580]     Content-Length: 1273
	I0229 18:50:59.637002    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:50:59 GMT
	I0229 18:50:59.637002    7340 round_trippers.go:580]     Audit-Id: 4138a3ab-ee5e-4eaf-9a00-befc73eaa525
	I0229 18:50:59.637002    7340 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"388"},"items":[{"metadata":{"name":"standard","uid":"515057e8-4d91-4f79-b089-fb94aadca826","resourceVersion":"388","creationTimestamp":"2024-02-29T18:50:59Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-02-29T18:50:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0229 18:50:59.641322    7340 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"515057e8-4d91-4f79-b089-fb94aadca826","resourceVersion":"388","creationTimestamp":"2024-02-29T18:50:59Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-02-29T18:50:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0229 18:50:59.641521    7340 round_trippers.go:463] PUT https://172.26.62.28:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0229 18:50:59.641588    7340 round_trippers.go:469] Request Headers:
	I0229 18:50:59.641680    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:50:59.641708    7340 round_trippers.go:473]     Content-Type: application/json
	I0229 18:50:59.641708    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:50:59.642448    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:50:59.645320    7340 round_trippers.go:577] Response Headers:
	I0229 18:50:59.645357    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:50:59.645357    7340 round_trippers.go:580]     Content-Length: 1220
	I0229 18:50:59.645357    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:50:59 GMT
	I0229 18:50:59.645357    7340 round_trippers.go:580]     Audit-Id: 072a1d88-295f-4b84-a68d-1c2bdad34e5d
	I0229 18:50:59.645357    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:50:59.645357    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:50:59.645357    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:50:59.645357    7340 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"515057e8-4d91-4f79-b089-fb94aadca826","resourceVersion":"388","creationTimestamp":"2024-02-29T18:50:59Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-02-29T18:50:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0229 18:50:59.646537    7340 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0229 18:50:59.647083    7340 addons.go:505] enable addons completed in 9.0124658s: enabled=[storage-provisioner default-storageclass]
	I0229 18:50:59.821980    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:50:59.822083    7340 round_trippers.go:469] Request Headers:
	I0229 18:50:59.822083    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:50:59.822083    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:50:59.825883    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:50:59.825883    7340 round_trippers.go:577] Response Headers:
	I0229 18:50:59.825883    7340 round_trippers.go:580]     Audit-Id: 94821a86-52e8-4f51-a661-88c952895843
	I0229 18:50:59.825883    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:50:59.825992    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:50:59.825992    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:50:59.825992    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:50:59.826044    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:50:59 GMT
	I0229 18:50:59.826221    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"331","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 18:51:00.333921    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:51:00.333921    7340 round_trippers.go:469] Request Headers:
	I0229 18:51:00.333921    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:51:00.333921    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:51:00.334448    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:51:00.338105    7340 round_trippers.go:577] Response Headers:
	I0229 18:51:00.338105    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:51:00.338105    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:51:00.338105    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:51:00 GMT
	I0229 18:51:00.338105    7340 round_trippers.go:580]     Audit-Id: fec2b014-9eca-4156-9eb7-fe0276dae5b0
	I0229 18:51:00.338105    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:51:00.338105    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:51:00.338216    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"331","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 18:51:00.338216    7340 node_ready.go:58] node "multinode-421600" has status "Ready":"False"
	I0229 18:51:00.824223    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:51:00.824223    7340 round_trippers.go:469] Request Headers:
	I0229 18:51:00.824223    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:51:00.824223    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:51:00.824604    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:51:00.824604    7340 round_trippers.go:577] Response Headers:
	I0229 18:51:00.824604    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:51:00.824604    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:51:00 GMT
	I0229 18:51:00.824604    7340 round_trippers.go:580]     Audit-Id: 431224ea-448d-4545-af81-9d83718fbd06
	I0229 18:51:00.824604    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:51:00.824604    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:51:00.824604    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:51:00.828921    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"331","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 18:51:01.323014    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:51:01.323116    7340 round_trippers.go:469] Request Headers:
	I0229 18:51:01.323116    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:51:01.323116    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:51:01.323389    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:51:01.327184    7340 round_trippers.go:577] Response Headers:
	I0229 18:51:01.327184    7340 round_trippers.go:580]     Audit-Id: a024b926-e6cf-4701-9f09-4c378ddbfa10
	I0229 18:51:01.327184    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:51:01.327184    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:51:01.327184    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:51:01.327184    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:51:01.327184    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:51:01 GMT
	I0229 18:51:01.327589    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"331","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 18:51:01.832861    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:51:01.832981    7340 round_trippers.go:469] Request Headers:
	I0229 18:51:01.833072    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:51:01.833072    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:51:01.837439    7340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 18:51:01.837439    7340 round_trippers.go:577] Response Headers:
	I0229 18:51:01.837439    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:51:01.837516    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:51:02 GMT
	I0229 18:51:01.837516    7340 round_trippers.go:580]     Audit-Id: 59849f61-bd9c-466d-9ad4-f327b9700247
	I0229 18:51:01.837516    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:51:01.837516    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:51:01.837516    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:51:01.837727    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"331","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 18:51:02.319133    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:51:02.319218    7340 round_trippers.go:469] Request Headers:
	I0229 18:51:02.319218    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:51:02.319314    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:51:02.319537    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:51:02.319537    7340 round_trippers.go:577] Response Headers:
	I0229 18:51:02.319537    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:51:02.323154    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:51:02.323154    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:51:02.323154    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:51:02 GMT
	I0229 18:51:02.323154    7340 round_trippers.go:580]     Audit-Id: 12f19692-441e-4b0f-828b-cb50b2980e61
	I0229 18:51:02.323154    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:51:02.323319    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"331","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 18:51:02.829185    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:51:02.829276    7340 round_trippers.go:469] Request Headers:
	I0229 18:51:02.829276    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:51:02.829276    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:51:02.829627    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:51:02.833671    7340 round_trippers.go:577] Response Headers:
	I0229 18:51:02.833671    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:51:02.833671    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:51:02.833671    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:51:02 GMT
	I0229 18:51:02.833671    7340 round_trippers.go:580]     Audit-Id: e15f2940-3f00-4c98-91ca-f1c33d4e4b41
	I0229 18:51:02.833671    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:51:02.833671    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:51:02.833826    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"331","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 18:51:02.834411    7340 node_ready.go:58] node "multinode-421600" has status "Ready":"False"
	I0229 18:51:03.323236    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:51:03.323236    7340 round_trippers.go:469] Request Headers:
	I0229 18:51:03.323303    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:51:03.323303    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:51:03.323708    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:51:03.326748    7340 round_trippers.go:577] Response Headers:
	I0229 18:51:03.326748    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:51:03.326748    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:51:03.326748    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:51:03.326748    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:51:03.326748    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:51:03 GMT
	I0229 18:51:03.326748    7340 round_trippers.go:580]     Audit-Id: ba7cb3de-9286-4575-af6b-6dcf7b55c7ba
	I0229 18:51:03.327065    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"331","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4926 chars]
	I0229 18:51:03.832385    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:51:03.832385    7340 round_trippers.go:469] Request Headers:
	I0229 18:51:03.832385    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:51:03.832385    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:51:03.839943    7340 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:51:03.839943    7340 round_trippers.go:577] Response Headers:
	I0229 18:51:03.840052    7340 round_trippers.go:580]     Audit-Id: 537e69c3-b530-49da-a7bf-6497e843df02
	I0229 18:51:03.840052    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:51:03.840052    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:51:03.840052    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:51:03.840052    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:51:03.840052    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:51:04 GMT
	I0229 18:51:03.840383    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"391","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I0229 18:51:03.840554    7340 node_ready.go:49] node "multinode-421600" has status "Ready":"True"
	I0229 18:51:03.840554    7340 node_ready.go:38] duration metric: took 12.5230713s waiting for node "multinode-421600" to be "Ready" ...
	I0229 18:51:03.841092    7340 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 18:51:03.841211    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/namespaces/kube-system/pods
	I0229 18:51:03.841304    7340 round_trippers.go:469] Request Headers:
	I0229 18:51:03.841304    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:51:03.841304    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:51:03.842989    7340 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 18:51:03.842989    7340 round_trippers.go:577] Response Headers:
	I0229 18:51:03.842989    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:51:03.842989    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:51:03.842989    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:51:04 GMT
	I0229 18:51:03.848358    7340 round_trippers.go:580]     Audit-Id: 258e470a-ee21-4de7-82db-1f237500dc39
	I0229 18:51:03.848358    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:51:03.848358    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:51:03.849469    7340 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"397"},"items":[{"metadata":{"name":"coredns-5dd5756b68-5qhb2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"cb647b50-f478-4265-9ff1-b66190c46393","resourceVersion":"396","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3eee4899-9868-46e9-9907-7fbe2995bab1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eee4899-9868-46e9-9907-7fbe2995bab1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53932 chars]
	I0229 18:51:03.853698    7340 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5qhb2" in "kube-system" namespace to be "Ready" ...
	I0229 18:51:03.854299    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5qhb2
	I0229 18:51:03.854299    7340 round_trippers.go:469] Request Headers:
	I0229 18:51:03.854299    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:51:03.854299    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:51:03.854995    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:51:03.854995    7340 round_trippers.go:577] Response Headers:
	I0229 18:51:03.858292    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:51:03.858292    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:51:03.858292    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:51:03.858345    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:51:03.858345    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:51:04 GMT
	I0229 18:51:03.858345    7340 round_trippers.go:580]     Audit-Id: c44e3e18-2856-4a0c-b1b3-1442c85c1665
	I0229 18:51:03.859612    7340 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5qhb2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"cb647b50-f478-4265-9ff1-b66190c46393","resourceVersion":"396","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3eee4899-9868-46e9-9907-7fbe2995bab1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eee4899-9868-46e9-9907-7fbe2995bab1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0229 18:51:03.860453    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:51:03.860453    7340 round_trippers.go:469] Request Headers:
	I0229 18:51:03.860453    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:51:03.860453    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:51:03.863852    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:51:03.863852    7340 round_trippers.go:577] Response Headers:
	I0229 18:51:03.863852    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:51:03.863852    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:51:03.863852    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:51:03.863852    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:51:04 GMT
	I0229 18:51:03.863852    7340 round_trippers.go:580]     Audit-Id: cc6021da-7696-43a9-af5b-0070a0099cc1
	I0229 18:51:03.863852    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:51:03.863852    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"391","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I0229 18:51:04.361736    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5qhb2
	I0229 18:51:04.361736    7340 round_trippers.go:469] Request Headers:
	I0229 18:51:04.361736    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:51:04.361736    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:51:04.365944    7340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 18:51:04.365944    7340 round_trippers.go:577] Response Headers:
	I0229 18:51:04.365944    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:51:04.365944    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:51:04.365944    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:51:04.365944    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:51:04 GMT
	I0229 18:51:04.365944    7340 round_trippers.go:580]     Audit-Id: 5358f79a-93a6-40b0-9abe-eee098d46635
	I0229 18:51:04.365944    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:51:04.365944    7340 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5qhb2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"cb647b50-f478-4265-9ff1-b66190c46393","resourceVersion":"396","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3eee4899-9868-46e9-9907-7fbe2995bab1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eee4899-9868-46e9-9907-7fbe2995bab1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0229 18:51:04.367234    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:51:04.367234    7340 round_trippers.go:469] Request Headers:
	I0229 18:51:04.367267    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:51:04.367267    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:51:04.369585    7340 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:51:04.370451    7340 round_trippers.go:577] Response Headers:
	I0229 18:51:04.370509    7340 round_trippers.go:580]     Audit-Id: fac46a62-2eeb-48a5-9326-98d4ce5e49bb
	I0229 18:51:04.370509    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:51:04.370509    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:51:04.370509    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:51:04.370509    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:51:04.370509    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:51:04 GMT
	I0229 18:51:04.370509    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"391","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I0229 18:51:04.863258    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5qhb2
	I0229 18:51:04.863258    7340 round_trippers.go:469] Request Headers:
	I0229 18:51:04.863258    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:51:04.863258    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:51:04.863784    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:51:04.863784    7340 round_trippers.go:577] Response Headers:
	I0229 18:51:04.863784    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:51:04.867361    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:51:05 GMT
	I0229 18:51:04.867361    7340 round_trippers.go:580]     Audit-Id: fff54b40-7ddc-4ebf-8005-2c4d0483b8cd
	I0229 18:51:04.867361    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:51:04.867361    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:51:04.867361    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:51:04.867561    7340 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5qhb2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"cb647b50-f478-4265-9ff1-b66190c46393","resourceVersion":"396","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3eee4899-9868-46e9-9907-7fbe2995bab1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eee4899-9868-46e9-9907-7fbe2995bab1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0229 18:51:04.868308    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:51:04.868308    7340 round_trippers.go:469] Request Headers:
	I0229 18:51:04.868308    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:51:04.868308    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:51:04.868573    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:51:04.871845    7340 round_trippers.go:577] Response Headers:
	I0229 18:51:04.871845    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:51:05 GMT
	I0229 18:51:04.871845    7340 round_trippers.go:580]     Audit-Id: 5aa43898-5676-4405-99a3-89bdab640cb2
	I0229 18:51:04.871845    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:51:04.871845    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:51:04.871845    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:51:04.871845    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:51:04.872188    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"391","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I0229 18:51:05.363984    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5qhb2
	I0229 18:51:05.363984    7340 round_trippers.go:469] Request Headers:
	I0229 18:51:05.363984    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:51:05.363984    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:51:05.367776    7340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:51:05.367776    7340 round_trippers.go:577] Response Headers:
	I0229 18:51:05.367776    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:51:05.367776    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:51:05.367776    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:51:05 GMT
	I0229 18:51:05.367776    7340 round_trippers.go:580]     Audit-Id: 6a34d9ae-1552-414b-8bea-8d07220c07bb
	I0229 18:51:05.367776    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:51:05.367776    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:51:05.368334    7340 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5qhb2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"cb647b50-f478-4265-9ff1-b66190c46393","resourceVersion":"396","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3eee4899-9868-46e9-9907-7fbe2995bab1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eee4899-9868-46e9-9907-7fbe2995bab1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0229 18:51:05.369139    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:51:05.369139    7340 round_trippers.go:469] Request Headers:
	I0229 18:51:05.369139    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:51:05.369139    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:51:05.374544    7340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 18:51:05.374544    7340 round_trippers.go:577] Response Headers:
	I0229 18:51:05.374544    7340 round_trippers.go:580]     Audit-Id: 0c5f9529-0bb1-4460-ad26-44d57320e3bf
	I0229 18:51:05.374544    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:51:05.374544    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:51:05.374544    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:51:05.374544    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:51:05.374544    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:51:05 GMT
	I0229 18:51:05.374544    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"391","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I0229 18:51:05.869492    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5qhb2
	I0229 18:51:05.869586    7340 round_trippers.go:469] Request Headers:
	I0229 18:51:05.869620    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:51:05.869620    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:51:05.870009    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:51:05.873222    7340 round_trippers.go:577] Response Headers:
	I0229 18:51:05.873222    7340 round_trippers.go:580]     Audit-Id: b7ee8a7f-9cf2-4399-a6d6-c495e88f082b
	I0229 18:51:05.873258    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:51:05.873258    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:51:05.873258    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:51:05.873258    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:51:05.873258    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:51:06 GMT
	I0229 18:51:05.873575    7340 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5qhb2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"cb647b50-f478-4265-9ff1-b66190c46393","resourceVersion":"396","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3eee4899-9868-46e9-9907-7fbe2995bab1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eee4899-9868-46e9-9907-7fbe2995bab1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0229 18:51:05.874241    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:51:05.874317    7340 round_trippers.go:469] Request Headers:
	I0229 18:51:05.874317    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:51:05.874317    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:51:05.877042    7340 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:51:05.877042    7340 round_trippers.go:577] Response Headers:
	I0229 18:51:05.877042    7340 round_trippers.go:580]     Audit-Id: e0649fb7-a082-41c5-8686-66ae60770b45
	I0229 18:51:05.878126    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:51:05.878126    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:51:05.878126    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:51:05.878168    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:51:05.878168    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:51:06 GMT
	I0229 18:51:05.878206    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"391","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I0229 18:51:05.878736    7340 pod_ready.go:102] pod "coredns-5dd5756b68-5qhb2" in "kube-system" namespace has status "Ready":"False"
	I0229 18:51:06.367833    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5qhb2
	I0229 18:51:06.367833    7340 round_trippers.go:469] Request Headers:
	I0229 18:51:06.367833    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:51:06.367833    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:51:06.373074    7340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 18:51:06.373074    7340 round_trippers.go:577] Response Headers:
	I0229 18:51:06.373074    7340 round_trippers.go:580]     Audit-Id: 16480123-971b-4032-9df0-8eba3d2e0365
	I0229 18:51:06.373074    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:51:06.373074    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:51:06.373074    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:51:06.373074    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:51:06.373181    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:51:06 GMT
	I0229 18:51:06.373349    7340 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5qhb2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"cb647b50-f478-4265-9ff1-b66190c46393","resourceVersion":"410","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3eee4899-9868-46e9-9907-7fbe2995bab1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eee4899-9868-46e9-9907-7fbe2995bab1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6511 chars]
	I0229 18:51:06.374058    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:51:06.374121    7340 round_trippers.go:469] Request Headers:
	I0229 18:51:06.374121    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:51:06.374121    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:51:06.378971    7340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 18:51:06.378971    7340 round_trippers.go:577] Response Headers:
	I0229 18:51:06.378971    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:51:06.378971    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:51:06 GMT
	I0229 18:51:06.378971    7340 round_trippers.go:580]     Audit-Id: 064c5bdc-3b74-42d0-9cab-0ddde5a865ca
	I0229 18:51:06.380029    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:51:06.380029    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:51:06.380029    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:51:06.380252    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"391","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I0229 18:51:06.862857    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5qhb2
	I0229 18:51:06.862857    7340 round_trippers.go:469] Request Headers:
	I0229 18:51:06.862857    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:51:06.862857    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:51:06.866538    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:51:06.866538    7340 round_trippers.go:577] Response Headers:
	I0229 18:51:06.866538    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:51:07 GMT
	I0229 18:51:06.866538    7340 round_trippers.go:580]     Audit-Id: 9958ac5a-3d01-4b12-8757-a70751493549
	I0229 18:51:06.866538    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:51:06.866538    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:51:06.866538    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:51:06.866680    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:51:06.867038    7340 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5qhb2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"cb647b50-f478-4265-9ff1-b66190c46393","resourceVersion":"410","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3eee4899-9868-46e9-9907-7fbe2995bab1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eee4899-9868-46e9-9907-7fbe2995bab1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6511 chars]
	I0229 18:51:06.867519    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:51:06.868221    7340 round_trippers.go:469] Request Headers:
	I0229 18:51:06.868221    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:51:06.868221    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:51:06.871627    7340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:51:06.871627    7340 round_trippers.go:577] Response Headers:
	I0229 18:51:06.871627    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:51:06.871627    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:51:07 GMT
	I0229 18:51:06.871627    7340 round_trippers.go:580]     Audit-Id: 717b1cd0-cd82-4599-bbe1-a548aceb41dc
	I0229 18:51:06.871627    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:51:06.871627    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:51:06.871627    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:51:06.871627    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"391","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I0229 18:51:07.363523    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5qhb2
	I0229 18:51:07.363664    7340 round_trippers.go:469] Request Headers:
	I0229 18:51:07.363664    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:51:07.363664    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:51:07.364212    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:51:07.367899    7340 round_trippers.go:577] Response Headers:
	I0229 18:51:07.367899    7340 round_trippers.go:580]     Audit-Id: f8e1201e-5aa7-445d-8546-cceeb730def0
	I0229 18:51:07.367899    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:51:07.367899    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:51:07.367899    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:51:07.367899    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:51:07.367899    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:51:07 GMT
	I0229 18:51:07.367899    7340 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5qhb2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"cb647b50-f478-4265-9ff1-b66190c46393","resourceVersion":"415","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3eee4899-9868-46e9-9907-7fbe2995bab1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eee4899-9868-46e9-9907-7fbe2995bab1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6282 chars]
	I0229 18:51:07.368514    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:51:07.368514    7340 round_trippers.go:469] Request Headers:
	I0229 18:51:07.368514    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:51:07.368514    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:51:07.372178    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:51:07.372178    7340 round_trippers.go:577] Response Headers:
	I0229 18:51:07.372265    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:51:07 GMT
	I0229 18:51:07.372265    7340 round_trippers.go:580]     Audit-Id: 6b4696c2-f730-4cb7-85db-59bddcb261d1
	I0229 18:51:07.372265    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:51:07.372265    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:51:07.372265    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:51:07.372265    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:51:07.372503    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"391","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I0229 18:51:07.372503    7340 pod_ready.go:92] pod "coredns-5dd5756b68-5qhb2" in "kube-system" namespace has status "Ready":"True"
	I0229 18:51:07.373042    7340 pod_ready.go:81] duration metric: took 3.51862s waiting for pod "coredns-5dd5756b68-5qhb2" in "kube-system" namespace to be "Ready" ...
	I0229 18:51:07.373197    7340 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 18:51:07.373197    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-421600
	I0229 18:51:07.373197    7340 round_trippers.go:469] Request Headers:
	I0229 18:51:07.373197    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:51:07.373197    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:51:07.378787    7340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 18:51:07.378787    7340 round_trippers.go:577] Response Headers:
	I0229 18:51:07.378787    7340 round_trippers.go:580]     Audit-Id: 6ddf5bc3-5403-47f7-bd81-20fa65ec49e4
	I0229 18:51:07.378787    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:51:07.378787    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:51:07.378787    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:51:07.378787    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:51:07.378787    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:51:07 GMT
	I0229 18:51:07.378787    7340 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-421600","namespace":"kube-system","uid":"a1147083-ea42-4f83-8bf0-24ab0f1f79fa","resourceVersion":"386","creationTimestamp":"2024-02-29T18:50:38Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.26.62.28:2379","kubernetes.io/config.hash":"cc377ea9919ea43502b39da82a7097ab","kubernetes.io/config.mirror":"cc377ea9919ea43502b39da82a7097ab","kubernetes.io/config.seen":"2024-02-29T18:50:38.626325846Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5852 chars]
	I0229 18:51:07.379398    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:51:07.379398    7340 round_trippers.go:469] Request Headers:
	I0229 18:51:07.379398    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:51:07.379398    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:51:07.382789    7340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:51:07.382843    7340 round_trippers.go:577] Response Headers:
	I0229 18:51:07.382843    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:51:07.382843    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:51:07.382843    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:51:07.382843    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:51:07 GMT
	I0229 18:51:07.382843    7340 round_trippers.go:580]     Audit-Id: 135ad395-ec0b-4955-b2b1-581b13fa86f5
	I0229 18:51:07.382843    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:51:07.382950    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"391","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I0229 18:51:07.382950    7340 pod_ready.go:92] pod "etcd-multinode-421600" in "kube-system" namespace has status "Ready":"True"
	I0229 18:51:07.382950    7340 pod_ready.go:81] duration metric: took 9.7519ms waiting for pod "etcd-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 18:51:07.382950    7340 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 18:51:07.383521    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-421600
	I0229 18:51:07.383521    7340 round_trippers.go:469] Request Headers:
	I0229 18:51:07.383521    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:51:07.383521    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:51:07.386100    7340 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:51:07.386100    7340 round_trippers.go:577] Response Headers:
	I0229 18:51:07.386100    7340 round_trippers.go:580]     Audit-Id: 80fea112-2c6f-4748-bf3d-fdd2ab066b3b
	I0229 18:51:07.386100    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:51:07.386100    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:51:07.386100    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:51:07.386100    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:51:07.386100    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:51:07 GMT
	I0229 18:51:07.386100    7340 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-421600","namespace":"kube-system","uid":"c2d5c1c0-2c5e-4070-832b-ae1e52d2e9a8","resourceVersion":"384","creationTimestamp":"2024-02-29T18:50:38Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.26.62.28:8443","kubernetes.io/config.hash":"3224776adbc0bdfa8ecf16b474e549a3","kubernetes.io/config.mirror":"3224776adbc0bdfa8ecf16b474e549a3","kubernetes.io/config.seen":"2024-02-29T18:50:38.626330946Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7390 chars]
	I0229 18:51:07.387534    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:51:07.387534    7340 round_trippers.go:469] Request Headers:
	I0229 18:51:07.387577    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:51:07.387577    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:51:07.387733    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:51:07.387733    7340 round_trippers.go:577] Response Headers:
	I0229 18:51:07.387733    7340 round_trippers.go:580]     Audit-Id: eec99021-56a3-4132-a932-fbba3ebc24f4
	I0229 18:51:07.387733    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:51:07.387733    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:51:07.387733    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:51:07.387733    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:51:07.387733    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:51:07 GMT
	I0229 18:51:07.387733    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"391","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I0229 18:51:07.387733    7340 pod_ready.go:92] pod "kube-apiserver-multinode-421600" in "kube-system" namespace has status "Ready":"True"
	I0229 18:51:07.387733    7340 pod_ready.go:81] duration metric: took 4.783ms waiting for pod "kube-apiserver-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 18:51:07.387733    7340 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 18:51:07.387733    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-421600
	I0229 18:51:07.387733    7340 round_trippers.go:469] Request Headers:
	I0229 18:51:07.387733    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:51:07.387733    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:51:07.391466    7340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:51:07.391466    7340 round_trippers.go:577] Response Headers:
	I0229 18:51:07.391466    7340 round_trippers.go:580]     Audit-Id: 4964ca26-fa08-4833-8a7c-c03ce30bcdba
	I0229 18:51:07.391466    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:51:07.391466    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:51:07.391466    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:51:07.391466    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:51:07.391466    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:51:07 GMT
	I0229 18:51:07.394289    7340 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-421600","namespace":"kube-system","uid":"a41ee888-f6df-43d4-9799-67a9ef0b6c87","resourceVersion":"385","creationTimestamp":"2024-02-29T18:50:38Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"dabef371df5cd2a8b883d06621dfc6bd","kubernetes.io/config.mirror":"dabef371df5cd2a8b883d06621dfc6bd","kubernetes.io/config.seen":"2024-02-29T18:50:38.626332146Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6965 chars]
	I0229 18:51:07.394289    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:51:07.394846    7340 round_trippers.go:469] Request Headers:
	I0229 18:51:07.394846    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:51:07.394846    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:51:07.395046    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:51:07.395046    7340 round_trippers.go:577] Response Headers:
	I0229 18:51:07.395046    7340 round_trippers.go:580]     Audit-Id: 0fceb35c-9aa9-40b7-8491-c471f132e529
	I0229 18:51:07.398400    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:51:07.398400    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:51:07.398400    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:51:07.398400    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:51:07.398400    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:51:07 GMT
	I0229 18:51:07.398690    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"391","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I0229 18:51:07.399120    7340 pod_ready.go:92] pod "kube-controller-manager-multinode-421600" in "kube-system" namespace has status "Ready":"True"
	I0229 18:51:07.399120    7340 pod_ready.go:81] duration metric: took 11.3862ms waiting for pod "kube-controller-manager-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 18:51:07.399154    7340 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fpk6m" in "kube-system" namespace to be "Ready" ...
	I0229 18:51:07.399250    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fpk6m
	I0229 18:51:07.399284    7340 round_trippers.go:469] Request Headers:
	I0229 18:51:07.399284    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:51:07.399316    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:51:07.401772    7340 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:51:07.402341    7340 round_trippers.go:577] Response Headers:
	I0229 18:51:07.402341    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:51:07.402376    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:51:07.402376    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:51:07 GMT
	I0229 18:51:07.402376    7340 round_trippers.go:580]     Audit-Id: 248d12e6-1594-467a-a195-c37eb6232d8d
	I0229 18:51:07.402376    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:51:07.402376    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:51:07.402376    7340 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fpk6m","generateName":"kube-proxy-","namespace":"kube-system","uid":"4c99c6ec-5ab0-434d-b5a9-cb24b10f8bbf","resourceVersion":"366","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a888d9f6-ed77-4118-830b-881d923ceb9c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a888d9f6-ed77-4118-830b-881d923ceb9c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0229 18:51:07.402376    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:51:07.402376    7340 round_trippers.go:469] Request Headers:
	I0229 18:51:07.402376    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:51:07.402376    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:51:07.404533    7340 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:51:07.404533    7340 round_trippers.go:577] Response Headers:
	I0229 18:51:07.404533    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:51:07.404533    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:51:07.404533    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:51:07 GMT
	I0229 18:51:07.404533    7340 round_trippers.go:580]     Audit-Id: d05d80b5-b560-47e3-b8eb-0586d2772276
	I0229 18:51:07.404533    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:51:07.404533    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:51:07.406306    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"391","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I0229 18:51:07.406470    7340 pod_ready.go:92] pod "kube-proxy-fpk6m" in "kube-system" namespace has status "Ready":"True"
	I0229 18:51:07.406470    7340 pod_ready.go:81] duration metric: took 7.2841ms waiting for pod "kube-proxy-fpk6m" in "kube-system" namespace to be "Ready" ...
	I0229 18:51:07.406470    7340 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 18:51:07.569319    7340 request.go:629] Waited for 162.5329ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.62.28:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-421600
	I0229 18:51:07.569319    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-421600
	I0229 18:51:07.569319    7340 round_trippers.go:469] Request Headers:
	I0229 18:51:07.569319    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:51:07.569319    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:51:07.574373    7340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 18:51:07.574445    7340 round_trippers.go:577] Response Headers:
	I0229 18:51:07.574445    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:51:07.574445    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:51:07.574523    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:51:07.574523    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:51:07.574523    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:51:07 GMT
	I0229 18:51:07.574523    7340 round_trippers.go:580]     Audit-Id: 6595c26e-df5f-4ade-813c-e6772223ecac
	I0229 18:51:07.574838    7340 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-421600","namespace":"kube-system","uid":"6742b97c-a3db-4fca-8da3-54fcde6d405a","resourceVersion":"383","creationTimestamp":"2024-02-29T18:50:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a2c94c0a4c322f0bf7fcafad0430344f","kubernetes.io/config.mirror":"a2c94c0a4c322f0bf7fcafad0430344f","kubernetes.io/config.seen":"2024-02-29T18:50:38.626333146Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4695 chars]
	I0229 18:51:07.777702    7340 request.go:629] Waited for 201.948ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:51:07.778049    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:51:07.778049    7340 round_trippers.go:469] Request Headers:
	I0229 18:51:07.778049    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:51:07.778049    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:51:07.778719    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:51:07.778719    7340 round_trippers.go:577] Response Headers:
	I0229 18:51:07.778719    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:51:07.778719    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:51:07.782317    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:51:07.782317    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:51:07.782317    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:51:07 GMT
	I0229 18:51:07.782317    7340 round_trippers.go:580]     Audit-Id: a449f1be-47a4-4bcc-be7d-6a0cc53b832a
	I0229 18:51:07.782644    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"391","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4781 chars]
	I0229 18:51:07.783029    7340 pod_ready.go:92] pod "kube-scheduler-multinode-421600" in "kube-system" namespace has status "Ready":"True"
	I0229 18:51:07.783029    7340 pod_ready.go:81] duration metric: took 376.5374ms waiting for pod "kube-scheduler-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 18:51:07.783029    7340 pod_ready.go:38] duration metric: took 3.9417184s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 18:51:07.783029    7340 api_server.go:52] waiting for apiserver process to appear ...
	I0229 18:51:07.792414    7340 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 18:51:07.817114    7340 command_runner.go:130] > 2121
	I0229 18:51:07.817114    7340 api_server.go:72] duration metric: took 16.626123s to wait for apiserver process to appear ...
	I0229 18:51:07.817114    7340 api_server.go:88] waiting for apiserver healthz status ...
	I0229 18:51:07.817290    7340 api_server.go:253] Checking apiserver healthz at https://172.26.62.28:8443/healthz ...
	I0229 18:51:07.824975    7340 api_server.go:279] https://172.26.62.28:8443/healthz returned 200:
	ok
	I0229 18:51:07.826180    7340 round_trippers.go:463] GET https://172.26.62.28:8443/version
	I0229 18:51:07.826180    7340 round_trippers.go:469] Request Headers:
	I0229 18:51:07.826180    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:51:07.826274    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:51:07.828134    7340 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 18:51:07.828134    7340 round_trippers.go:577] Response Headers:
	I0229 18:51:07.828134    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:51:07.828134    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:51:07.828134    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:51:07.828134    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:51:07.828134    7340 round_trippers.go:580]     Content-Length: 264
	I0229 18:51:07.828134    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:51:07 GMT
	I0229 18:51:07.828134    7340 round_trippers.go:580]     Audit-Id: c2059021-837d-4f36-92a6-ae73e1a3de18
	I0229 18:51:07.828134    7340 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
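The control-plane version reported two lines below (`v1.28.4`) is parsed out of the `/version` response body shown above. A sketch of that decoding step (the `versionInfo` struct here is illustrative; the real client decodes into `version.Info` from k8s.io/apimachinery):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// versionInfo mirrors the fields of the /version response in the log.
type versionInfo struct {
	Major      string `json:"major"`
	Minor      string `json:"minor"`
	GitVersion string `json:"gitVersion"`
	Platform   string `json:"platform"`
}

// parseVersion decodes a /version response body.
func parseVersion(body []byte) (versionInfo, error) {
	var v versionInfo
	err := json.Unmarshal(body, &v)
	return v, err
}

func main() {
	// Abbreviated copy of the response body logged above.
	body := []byte(`{"major":"1","minor":"28","gitVersion":"v1.28.4","platform":"linux/amd64"}`)
	v, err := parseVersion(body)
	if err != nil {
		panic(err)
	}
	fmt.Println(v.GitVersion)
}
```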
	I0229 18:51:07.828134    7340 api_server.go:141] control plane version: v1.28.4
	I0229 18:51:07.828134    7340 api_server.go:131] duration metric: took 11.0191ms to wait for apiserver health ...
	I0229 18:51:07.828134    7340 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 18:51:07.965611    7340 request.go:629] Waited for 137.4238ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.62.28:8443/api/v1/namespaces/kube-system/pods
	I0229 18:51:07.965765    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/namespaces/kube-system/pods
	I0229 18:51:07.965765    7340 round_trippers.go:469] Request Headers:
	I0229 18:51:07.965765    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:51:07.965765    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:51:07.966388    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:51:07.966388    7340 round_trippers.go:577] Response Headers:
	I0229 18:51:07.966388    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:51:07.966388    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:51:08 GMT
	I0229 18:51:07.966388    7340 round_trippers.go:580]     Audit-Id: 3ebfbb2e-e387-4974-9717-b605154d7f0b
	I0229 18:51:07.966388    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:51:07.966388    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:51:07.966388    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:51:07.972195    7340 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"419"},"items":[{"metadata":{"name":"coredns-5dd5756b68-5qhb2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"cb647b50-f478-4265-9ff1-b66190c46393","resourceVersion":"415","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3eee4899-9868-46e9-9907-7fbe2995bab1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eee4899-9868-46e9-9907-7fbe2995bab1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54048 chars]
	I0229 18:51:07.974387    7340 system_pods.go:59] 8 kube-system pods found
	I0229 18:51:07.974953    7340 system_pods.go:61] "coredns-5dd5756b68-5qhb2" [cb647b50-f478-4265-9ff1-b66190c46393] Running
	I0229 18:51:07.974953    7340 system_pods.go:61] "etcd-multinode-421600" [a1147083-ea42-4f83-8bf0-24ab0f1f79fa] Running
	I0229 18:51:07.974953    7340 system_pods.go:61] "kindnet-447dh" [c2052338-6892-465a-b1d4-c4247c9ac2a0] Running
	I0229 18:51:07.974953    7340 system_pods.go:61] "kube-apiserver-multinode-421600" [c2d5c1c0-2c5e-4070-832b-ae1e52d2e9a8] Running
	I0229 18:51:07.974997    7340 system_pods.go:61] "kube-controller-manager-multinode-421600" [a41ee888-f6df-43d4-9799-67a9ef0b6c87] Running
	I0229 18:51:07.974997    7340 system_pods.go:61] "kube-proxy-fpk6m" [4c99c6ec-5ab0-434d-b5a9-cb24b10f8bbf] Running
	I0229 18:51:07.974997    7340 system_pods.go:61] "kube-scheduler-multinode-421600" [6742b97c-a3db-4fca-8da3-54fcde6d405a] Running
	I0229 18:51:07.974997    7340 system_pods.go:61] "storage-provisioner" [98ad07fa-8673-4933-9197-b7ceb8a3afbc] Running
	I0229 18:51:07.975031    7340 system_pods.go:74] duration metric: took 146.8889ms to wait for pod list to return data ...
	I0229 18:51:07.975031    7340 default_sa.go:34] waiting for default service account to be created ...
	I0229 18:51:08.170146    7340 request.go:629] Waited for 194.7614ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.62.28:8443/api/v1/namespaces/default/serviceaccounts
	I0229 18:51:08.170413    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/namespaces/default/serviceaccounts
	I0229 18:51:08.170413    7340 round_trippers.go:469] Request Headers:
	I0229 18:51:08.170413    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:51:08.170413    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:51:08.170754    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:51:08.174351    7340 round_trippers.go:577] Response Headers:
	I0229 18:51:08.174351    7340 round_trippers.go:580]     Content-Length: 261
	I0229 18:51:08.174351    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:51:08 GMT
	I0229 18:51:08.174351    7340 round_trippers.go:580]     Audit-Id: 4287c985-983c-48b8-b434-1e14ecabe1d6
	I0229 18:51:08.174351    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:51:08.174351    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:51:08.174351    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:51:08.174440    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:51:08.174440    7340 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"419"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"3e667406-4272-4c92-bf6d-ce7b6f584082","resourceVersion":"302","creationTimestamp":"2024-02-29T18:50:50Z"}}]}
	I0229 18:51:08.174678    7340 default_sa.go:45] found service account: "default"
	I0229 18:51:08.174766    7340 default_sa.go:55] duration metric: took 199.724ms for default service account to be created ...
	I0229 18:51:08.174766    7340 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 18:51:08.376030    7340 request.go:629] Waited for 200.9317ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.62.28:8443/api/v1/namespaces/kube-system/pods
	I0229 18:51:08.376238    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/namespaces/kube-system/pods
	I0229 18:51:08.376238    7340 round_trippers.go:469] Request Headers:
	I0229 18:51:08.376238    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:51:08.376238    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:51:08.376942    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:51:08.376942    7340 round_trippers.go:577] Response Headers:
	I0229 18:51:08.381380    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:51:08.381380    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:51:08.381380    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:51:08.381380    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:51:08 GMT
	I0229 18:51:08.381380    7340 round_trippers.go:580]     Audit-Id: 9628e2d7-cf02-4d07-b026-17817d8d4bfb
	I0229 18:51:08.381380    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:51:08.384053    7340 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"420"},"items":[{"metadata":{"name":"coredns-5dd5756b68-5qhb2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"cb647b50-f478-4265-9ff1-b66190c46393","resourceVersion":"415","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3eee4899-9868-46e9-9907-7fbe2995bab1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eee4899-9868-46e9-9907-7fbe2995bab1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54048 chars]
	I0229 18:51:08.387739    7340 system_pods.go:86] 8 kube-system pods found
	I0229 18:51:08.387817    7340 system_pods.go:89] "coredns-5dd5756b68-5qhb2" [cb647b50-f478-4265-9ff1-b66190c46393] Running
	I0229 18:51:08.387817    7340 system_pods.go:89] "etcd-multinode-421600" [a1147083-ea42-4f83-8bf0-24ab0f1f79fa] Running
	I0229 18:51:08.387817    7340 system_pods.go:89] "kindnet-447dh" [c2052338-6892-465a-b1d4-c4247c9ac2a0] Running
	I0229 18:51:08.387817    7340 system_pods.go:89] "kube-apiserver-multinode-421600" [c2d5c1c0-2c5e-4070-832b-ae1e52d2e9a8] Running
	I0229 18:51:08.387817    7340 system_pods.go:89] "kube-controller-manager-multinode-421600" [a41ee888-f6df-43d4-9799-67a9ef0b6c87] Running
	I0229 18:51:08.387817    7340 system_pods.go:89] "kube-proxy-fpk6m" [4c99c6ec-5ab0-434d-b5a9-cb24b10f8bbf] Running
	I0229 18:51:08.387895    7340 system_pods.go:89] "kube-scheduler-multinode-421600" [6742b97c-a3db-4fca-8da3-54fcde6d405a] Running
	I0229 18:51:08.387996    7340 system_pods.go:89] "storage-provisioner" [98ad07fa-8673-4933-9197-b7ceb8a3afbc] Running
	I0229 18:51:08.387996    7340 system_pods.go:126] duration metric: took 213.2177ms to wait for k8s-apps to be running ...
	I0229 18:51:08.387996    7340 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 18:51:08.395068    7340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:51:08.421274    7340 system_svc.go:56] duration metric: took 31.7757ms WaitForService to wait for kubelet.
	I0229 18:51:08.421274    7340 kubeadm.go:581] duration metric: took 17.2302486s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 18:51:08.421274    7340 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:51:08.578744    7340 request.go:629] Waited for 157.2815ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.62.28:8443/api/v1/nodes
	I0229 18:51:08.579108    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes
	I0229 18:51:08.579146    7340 round_trippers.go:469] Request Headers:
	I0229 18:51:08.579146    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:51:08.579146    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:51:08.579784    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:51:08.579784    7340 round_trippers.go:577] Response Headers:
	I0229 18:51:08.579784    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:51:08.579784    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:51:08.579784    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:51:08.579784    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:51:08.579784    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:51:08 GMT
	I0229 18:51:08.579784    7340 round_trippers.go:580]     Audit-Id: 4ce0835e-f7c1-429b-a0cd-6a773884375d
	I0229 18:51:08.585248    7340 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"420"},"items":[{"metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"391","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4834 chars]
	I0229 18:51:08.585827    7340 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:51:08.585827    7340 node_conditions.go:123] node cpu capacity is 2
	I0229 18:51:08.585889    7340 node_conditions.go:105] duration metric: took 164.6064ms to run NodePressure ...
	I0229 18:51:08.585889    7340 start.go:228] waiting for startup goroutines ...
	I0229 18:51:08.585889    7340 start.go:233] waiting for cluster config update ...
	I0229 18:51:08.585957    7340 start.go:242] writing updated cluster config ...
	I0229 18:51:08.587707    7340 out.go:177] 
	I0229 18:51:08.598364    7340 config.go:182] Loaded profile config "multinode-421600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 18:51:08.598364    7340 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\config.json ...
	I0229 18:51:08.601379    7340 out.go:177] * Starting worker node multinode-421600-m02 in cluster multinode-421600
	I0229 18:51:08.601973    7340 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 18:51:08.601973    7340 cache.go:56] Caching tarball of preloaded images
	I0229 18:51:08.602704    7340 preload.go:174] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 18:51:08.602704    7340 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0229 18:51:08.602704    7340 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\config.json ...
	I0229 18:51:08.612913    7340 start.go:365] acquiring machines lock for multinode-421600-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 18:51:08.612913    7340 start.go:369] acquired machines lock for "multinode-421600-m02" in 0s
	I0229 18:51:08.612913    7340 start.go:93] Provisioning new machine with config: &{Name:multinode-421600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-421600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.26.62.28 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequ
ested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0229 18:51:08.612913    7340 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0229 18:51:08.612913    7340 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0229 18:51:08.614315    7340 start.go:159] libmachine.API.Create for "multinode-421600" (driver="hyperv")
	I0229 18:51:08.614515    7340 client.go:168] LocalClient.Create starting
	I0229 18:51:08.614660    7340 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0229 18:51:08.614660    7340 main.go:141] libmachine: Decoding PEM data...
	I0229 18:51:08.614660    7340 main.go:141] libmachine: Parsing certificate...
	I0229 18:51:08.615321    7340 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0229 18:51:08.615554    7340 main.go:141] libmachine: Decoding PEM data...
	I0229 18:51:08.615554    7340 main.go:141] libmachine: Parsing certificate...
	I0229 18:51:08.615554    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0229 18:51:10.367193    7340 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0229 18:51:10.367193    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:51:10.367193    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0229 18:51:11.969148    7340 main.go:141] libmachine: [stdout =====>] : False
	
	I0229 18:51:11.969148    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:51:11.969148    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0229 18:51:13.357484    7340 main.go:141] libmachine: [stdout =====>] : True
	
	I0229 18:51:13.357484    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:51:13.364871    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0229 18:51:16.713330    7340 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0229 18:51:16.713330    7340 main.go:141] libmachine: [stderr =====>] : 
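The switch-selection step above shells out to PowerShell, asks `ConvertTo-Json` for the candidate `Get-VMSwitch` entries, and decodes the result before settling on "Default Switch". A sketch of the decoding half (struct and function names are illustrative, not minikube's actual helpers):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// vmSwitch mirrors the three properties selected from Hyper-V\Get-VMSwitch
// in the log: Id, Name, SwitchType.
type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int
}

// parseSwitches decodes the JSON array emitted by ConvertTo-Json.
func parseSwitches(out []byte) ([]vmSwitch, error) {
	var switches []vmSwitch
	err := json.Unmarshal(out, &switches)
	return switches, err
}

func main() {
	// The exact stdout captured in the log above.
	out := []byte(`[
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]`)
	switches, err := parseSwitches(out)
	if err != nil {
		panic(err)
	}
	fmt.Printf("Using switch %q\n", switches[0].Name)
}
```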
	I0229 18:51:16.714883    7340 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 18:51:17.119619    7340 main.go:141] libmachine: Creating SSH key...
	I0229 18:51:17.338205    7340 main.go:141] libmachine: Creating VM...
	I0229 18:51:17.338205    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0229 18:51:19.946195    7340 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0229 18:51:19.946195    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:51:19.946195    7340 main.go:141] libmachine: Using switch "Default Switch"
	I0229 18:51:19.946195    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0229 18:51:21.584452    7340 main.go:141] libmachine: [stdout =====>] : True
	
	I0229 18:51:21.584452    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:51:21.584452    7340 main.go:141] libmachine: Creating VHD
	I0229 18:51:21.584452    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0229 18:51:25.113877    7340 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 7574E210-74B5-4D32-88A1-EBBB00C14AE5
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0229 18:51:25.113877    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:51:25.113877    7340 main.go:141] libmachine: Writing magic tar header
	I0229 18:51:25.113877    7340 main.go:141] libmachine: Writing SSH key tar header
	I0229 18:51:25.124386    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0229 18:51:28.084574    7340 main.go:141] libmachine: [stdout =====>] : 
	I0229 18:51:28.084574    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:51:28.084574    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600-m02\disk.vhd' -SizeBytes 20000MB
	I0229 18:51:30.465053    7340 main.go:141] libmachine: [stdout =====>] : 
	I0229 18:51:30.465053    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:51:30.475486    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-421600-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0229 18:51:33.714358    7340 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version

	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-421600-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0229 18:51:33.714358    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:51:33.725045    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-421600-m02 -DynamicMemoryEnabled $false
	I0229 18:51:35.765327    7340 main.go:141] libmachine: [stdout =====>] : 
	I0229 18:51:35.765327    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:51:35.765327    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-421600-m02 -Count 2
	I0229 18:51:37.771831    7340 main.go:141] libmachine: [stdout =====>] : 
	I0229 18:51:37.771831    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:51:37.782458    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-421600-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600-m02\boot2docker.iso'
	I0229 18:51:40.177776    7340 main.go:141] libmachine: [stdout =====>] : 
	I0229 18:51:40.177776    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:51:40.177776    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-421600-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600-m02\disk.vhd'
	I0229 18:51:42.557275    7340 main.go:141] libmachine: [stdout =====>] : 
	I0229 18:51:42.565881    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:51:42.565881    7340 main.go:141] libmachine: Starting VM...
	I0229 18:51:42.565881    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-421600-m02
	I0229 18:51:45.168866    7340 main.go:141] libmachine: [stdout =====>] : 
	I0229 18:51:45.168957    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:51:45.168957    7340 main.go:141] libmachine: Waiting for host to start...
	I0229 18:51:45.169007    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 18:51:47.208245    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:51:47.209143    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:51:47.209214    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 18:51:49.511547    7340 main.go:141] libmachine: [stdout =====>] : 
	I0229 18:51:49.511733    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:51:50.524332    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 18:51:52.564590    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:51:52.564590    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:51:52.564590    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 18:51:54.893459    7340 main.go:141] libmachine: [stdout =====>] : 
	I0229 18:51:54.893724    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:51:55.897526    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 18:51:57.888095    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:51:57.888095    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:51:57.888290    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 18:52:00.195323    7340 main.go:141] libmachine: [stdout =====>] : 
	I0229 18:52:00.201487    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:52:01.208286    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 18:52:03.193244    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:52:03.193244    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:52:03.193339    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 18:52:05.450807    7340 main.go:141] libmachine: [stdout =====>] : 
	I0229 18:52:05.450807    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:52:06.452956    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 18:52:08.428009    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:52:08.439212    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:52:08.439366    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 18:52:10.797598    7340 main.go:141] libmachine: [stdout =====>] : 172.26.56.47
	
	I0229 18:52:10.808443    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:52:10.808443    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 18:52:12.727671    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:52:12.737532    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:52:12.737532    7340 machine.go:88] provisioning docker machine ...
	I0229 18:52:12.737631    7340 buildroot.go:166] provisioning hostname "multinode-421600-m02"
	I0229 18:52:12.737682    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 18:52:14.692843    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:52:14.692843    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:52:14.703095    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 18:52:17.043383    7340 main.go:141] libmachine: [stdout =====>] : 172.26.56.47
	
	I0229 18:52:17.043383    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:52:17.057467    7340 main.go:141] libmachine: Using SSH client type: native
	I0229 18:52:17.066965    7340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.56.47 22 <nil> <nil>}
	I0229 18:52:17.066965    7340 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-421600-m02 && echo "multinode-421600-m02" | sudo tee /etc/hostname
	I0229 18:52:17.228167    7340 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-421600-m02
	
	I0229 18:52:17.228167    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 18:52:19.165448    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:52:19.165448    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:52:19.176162    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 18:52:21.494865    7340 main.go:141] libmachine: [stdout =====>] : 172.26.56.47
	
	I0229 18:52:21.494865    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:52:21.508734    7340 main.go:141] libmachine: Using SSH client type: native
	I0229 18:52:21.509080    7340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.56.47 22 <nil> <nil>}
	I0229 18:52:21.509180    7340 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-421600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-421600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-421600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 18:52:21.674295    7340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 18:52:21.674488    7340 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0229 18:52:21.674488    7340 buildroot.go:174] setting up certificates
	I0229 18:52:21.674488    7340 provision.go:83] configureAuth start
	I0229 18:52:21.674488    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 18:52:23.617065    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:52:23.617065    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:52:23.617065    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 18:52:25.968107    7340 main.go:141] libmachine: [stdout =====>] : 172.26.56.47
	
	I0229 18:52:25.968107    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:52:25.968107    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 18:52:27.902218    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:52:27.902218    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:52:27.912933    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 18:52:30.262853    7340 main.go:141] libmachine: [stdout =====>] : 172.26.56.47
	
	I0229 18:52:30.273175    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:52:30.273175    7340 provision.go:138] copyHostCerts
	I0229 18:52:30.273318    7340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0229 18:52:30.273589    7340 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0229 18:52:30.273589    7340 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0229 18:52:30.273875    7340 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0229 18:52:30.274744    7340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0229 18:52:30.274878    7340 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0229 18:52:30.274878    7340 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0229 18:52:30.275132    7340 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0229 18:52:30.275849    7340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0229 18:52:30.276128    7340 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0229 18:52:30.276128    7340 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0229 18:52:30.276395    7340 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0229 18:52:30.277194    7340 provision.go:112] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-421600-m02 san=[172.26.56.47 172.26.56.47 localhost 127.0.0.1 minikube multinode-421600-m02]
	I0229 18:52:30.469167    7340 provision.go:172] copyRemoteCerts
	I0229 18:52:30.471264    7340 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 18:52:30.471264    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 18:52:32.461992    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:52:32.461992    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:52:32.462063    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 18:52:34.770136    7340 main.go:141] libmachine: [stdout =====>] : 172.26.56.47
	
	I0229 18:52:34.770136    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:52:34.780293    7340 sshutil.go:53] new ssh client: &{IP:172.26.56.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600-m02\id_rsa Username:docker}
	I0229 18:52:34.887338    7340 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4158291s)
	I0229 18:52:34.887338    7340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0229 18:52:34.889907    7340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 18:52:34.934588    7340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0229 18:52:34.935009    7340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I0229 18:52:34.971236    7340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0229 18:52:34.980536    7340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 18:52:35.024378    7340 provision.go:86] duration metric: configureAuth took 13.3491504s
	I0229 18:52:35.024378    7340 buildroot.go:189] setting minikube options for container-runtime
	I0229 18:52:35.024901    7340 config.go:182] Loaded profile config "multinode-421600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 18:52:35.024981    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 18:52:36.957406    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:52:36.957406    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:52:36.957406    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 18:52:39.294057    7340 main.go:141] libmachine: [stdout =====>] : 172.26.56.47
	
	I0229 18:52:39.294162    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:52:39.301958    7340 main.go:141] libmachine: Using SSH client type: native
	I0229 18:52:39.302245    7340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.56.47 22 <nil> <nil>}
	I0229 18:52:39.302245    7340 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 18:52:39.439062    7340 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 18:52:39.439062    7340 buildroot.go:70] root file system type: tmpfs
	I0229 18:52:39.439344    7340 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 18:52:39.439433    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 18:52:41.388449    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:52:41.388449    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:52:41.398514    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 18:52:43.717422    7340 main.go:141] libmachine: [stdout =====>] : 172.26.56.47
	
	I0229 18:52:43.717422    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:52:43.731685    7340 main.go:141] libmachine: Using SSH client type: native
	I0229 18:52:43.732112    7340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.56.47 22 <nil> <nil>}
	I0229 18:52:43.732218    7340 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.26.62.28"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 18:52:43.905083    7340 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.26.62.28
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 18:52:43.905177    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 18:52:45.794123    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:52:45.804871    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:52:45.804871    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 18:52:48.176463    7340 main.go:141] libmachine: [stdout =====>] : 172.26.56.47
	
	I0229 18:52:48.183871    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:52:48.189403    7340 main.go:141] libmachine: Using SSH client type: native
	I0229 18:52:48.189928    7340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.56.47 22 <nil> <nil>}
	I0229 18:52:48.190043    7340 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 18:52:49.190076    7340 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0229 18:52:49.190134    7340 machine.go:91] provisioned docker machine in 36.4505826s
	I0229 18:52:49.190188    7340 client.go:171] LocalClient.Create took 1m40.5700424s
	I0229 18:52:49.190188    7340 start.go:167] duration metric: libmachine.API.Create for "multinode-421600" took 1m40.5702965s
	I0229 18:52:49.190256    7340 start.go:300] post-start starting for "multinode-421600-m02" (driver="hyperv")
	I0229 18:52:49.190256    7340 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 18:52:49.198670    7340 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 18:52:49.198670    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 18:52:51.136950    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:52:51.136950    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:52:51.136950    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 18:52:53.462522    7340 main.go:141] libmachine: [stdout =====>] : 172.26.56.47
	
	I0229 18:52:53.462522    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:52:53.472643    7340 sshutil.go:53] new ssh client: &{IP:172.26.56.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600-m02\id_rsa Username:docker}
	I0229 18:52:53.585235    7340 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3862723s)
	I0229 18:52:53.593578    7340 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 18:52:53.600559    7340 command_runner.go:130] > NAME=Buildroot
	I0229 18:52:53.600559    7340 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0229 18:52:53.600559    7340 command_runner.go:130] > ID=buildroot
	I0229 18:52:53.600559    7340 command_runner.go:130] > VERSION_ID=2023.02.9
	I0229 18:52:53.600559    7340 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0229 18:52:53.600790    7340 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 18:52:53.600830    7340 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0229 18:52:53.601117    7340 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0229 18:52:53.601317    7340 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem -> 43562.pem in /etc/ssl/certs
	I0229 18:52:53.601317    7340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem -> /etc/ssl/certs/43562.pem
	I0229 18:52:53.610927    7340 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 18:52:53.628641    7340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem --> /etc/ssl/certs/43562.pem (1708 bytes)
	I0229 18:52:53.671825    7340 start.go:303] post-start completed in 4.4810778s
	I0229 18:52:53.674480    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 18:52:55.639624    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:52:55.639624    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:52:55.639850    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 18:52:57.961120    7340 main.go:141] libmachine: [stdout =====>] : 172.26.56.47
	
	I0229 18:52:57.961120    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:52:57.971262    7340 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\config.json ...
	I0229 18:52:57.973489    7340 start.go:128] duration metric: createHost completed in 1m49.35441s
	I0229 18:52:57.973489    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 18:52:59.896562    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:52:59.906536    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:52:59.906536    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 18:53:02.255240    7340 main.go:141] libmachine: [stdout =====>] : 172.26.56.47
	
	I0229 18:53:02.255485    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:53:02.259325    7340 main.go:141] libmachine: Using SSH client type: native
	I0229 18:53:02.259712    7340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.56.47 22 <nil> <nil>}
	I0229 18:53:02.259712    7340 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 18:53:02.399058    7340 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709232782.560915328
	
	I0229 18:53:02.399058    7340 fix.go:206] guest clock: 1709232782.560915328
	I0229 18:53:02.399058    7340 fix.go:219] Guest: 2024-02-29 18:53:02.560915328 +0000 UTC Remote: 2024-02-29 18:52:57.9734897 +0000 UTC m=+305.827667801 (delta=4.587425628s)
	I0229 18:53:02.399163    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 18:53:04.343409    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:53:04.343409    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:53:04.343681    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 18:53:06.654115    7340 main.go:141] libmachine: [stdout =====>] : 172.26.56.47
	
	I0229 18:53:06.654115    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:53:06.668931    7340 main.go:141] libmachine: Using SSH client type: native
	I0229 18:53:06.669593    7340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.56.47 22 <nil> <nil>}
	I0229 18:53:06.669593    7340 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709232782
	I0229 18:53:06.815109    7340 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Feb 29 18:53:02 UTC 2024
	
	I0229 18:53:06.815191    7340 fix.go:226] clock set: Thu Feb 29 18:53:02 UTC 2024
	 (err=<nil>)
	I0229 18:53:06.815282    7340 start.go:83] releasing machines lock for "multinode-421600-m02", held for 1m58.1957253s
	I0229 18:53:06.815488    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 18:53:08.753898    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:53:08.753898    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:53:08.764037    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 18:53:11.111133    7340 main.go:141] libmachine: [stdout =====>] : 172.26.56.47
	
	I0229 18:53:11.121412    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:53:11.122047    7340 out.go:177] * Found network options:
	I0229 18:53:11.122275    7340 out.go:177]   - NO_PROXY=172.26.62.28
	W0229 18:53:11.123042    7340 proxy.go:119] fail to check proxy env: Error ip not in block
	I0229 18:53:11.123651    7340 out.go:177]   - NO_PROXY=172.26.62.28
	W0229 18:53:11.124168    7340 proxy.go:119] fail to check proxy env: Error ip not in block
	W0229 18:53:11.125343    7340 proxy.go:119] fail to check proxy env: Error ip not in block
	I0229 18:53:11.127969    7340 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 18:53:11.127969    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 18:53:11.139386    7340 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0229 18:53:11.139386    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 18:53:13.102460    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:53:13.112312    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:53:13.112430    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 18:53:13.152819    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:53:13.152819    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:53:13.153170    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 18:53:15.531656    7340 main.go:141] libmachine: [stdout =====>] : 172.26.56.47
	
	I0229 18:53:15.531656    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:53:15.541822    7340 sshutil.go:53] new ssh client: &{IP:172.26.56.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600-m02\id_rsa Username:docker}
	I0229 18:53:15.553562    7340 main.go:141] libmachine: [stdout =====>] : 172.26.56.47
	
	I0229 18:53:15.560649    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:53:15.560899    7340 sshutil.go:53] new ssh client: &{IP:172.26.56.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600-m02\id_rsa Username:docker}
	I0229 18:53:15.640348    7340 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0229 18:53:15.646894    7340 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.5072586s)
	W0229 18:53:15.647014    7340 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 18:53:15.655673    7340 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 18:53:15.742452    7340 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0229 18:53:15.742612    7340 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6143876s)
	I0229 18:53:15.742612    7340 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0229 18:53:15.742612    7340 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 18:53:15.742771    7340 start.go:475] detecting cgroup driver to use...
	I0229 18:53:15.742931    7340 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:53:15.776324    7340 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0229 18:53:15.790535    7340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 18:53:15.818503    7340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 18:53:15.838017    7340 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 18:53:15.845866    7340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 18:53:15.875458    7340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:53:15.905229    7340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 18:53:15.932585    7340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 18:53:15.960767    7340 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 18:53:15.989316    7340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 18:53:16.016420    7340 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 18:53:16.022563    7340 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0229 18:53:16.043503    7340 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 18:53:16.070755    7340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:53:16.273060    7340 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 18:53:16.304827    7340 start.go:475] detecting cgroup driver to use...
	I0229 18:53:16.313540    7340 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 18:53:16.339656    7340 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0229 18:53:16.339719    7340 command_runner.go:130] > [Unit]
	I0229 18:53:16.339719    7340 command_runner.go:130] > Description=Docker Application Container Engine
	I0229 18:53:16.339719    7340 command_runner.go:130] > Documentation=https://docs.docker.com
	I0229 18:53:16.339783    7340 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0229 18:53:16.339783    7340 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0229 18:53:16.339783    7340 command_runner.go:130] > StartLimitBurst=3
	I0229 18:53:16.339783    7340 command_runner.go:130] > StartLimitIntervalSec=60
	I0229 18:53:16.339783    7340 command_runner.go:130] > [Service]
	I0229 18:53:16.339783    7340 command_runner.go:130] > Type=notify
	I0229 18:53:16.339783    7340 command_runner.go:130] > Restart=on-failure
	I0229 18:53:16.339783    7340 command_runner.go:130] > Environment=NO_PROXY=172.26.62.28
	I0229 18:53:16.339842    7340 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0229 18:53:16.339842    7340 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0229 18:53:16.339842    7340 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0229 18:53:16.339842    7340 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0229 18:53:16.339906    7340 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0229 18:53:16.339906    7340 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0229 18:53:16.339906    7340 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0229 18:53:16.339906    7340 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0229 18:53:16.339977    7340 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0229 18:53:16.339977    7340 command_runner.go:130] > ExecStart=
	I0229 18:53:16.339977    7340 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0229 18:53:16.340048    7340 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0229 18:53:16.340048    7340 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0229 18:53:16.340048    7340 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0229 18:53:16.340048    7340 command_runner.go:130] > LimitNOFILE=infinity
	I0229 18:53:16.340048    7340 command_runner.go:130] > LimitNPROC=infinity
	I0229 18:53:16.340048    7340 command_runner.go:130] > LimitCORE=infinity
	I0229 18:53:16.340048    7340 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0229 18:53:16.340121    7340 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0229 18:53:16.340121    7340 command_runner.go:130] > TasksMax=infinity
	I0229 18:53:16.340121    7340 command_runner.go:130] > TimeoutStartSec=0
	I0229 18:53:16.340121    7340 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0229 18:53:16.340121    7340 command_runner.go:130] > Delegate=yes
	I0229 18:53:16.340121    7340 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0229 18:53:16.340121    7340 command_runner.go:130] > KillMode=process
	I0229 18:53:16.340193    7340 command_runner.go:130] > [Install]
	I0229 18:53:16.340193    7340 command_runner.go:130] > WantedBy=multi-user.target
	I0229 18:53:16.349225    7340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:53:16.379216    7340 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 18:53:16.412085    7340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 18:53:16.449487    7340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 18:53:16.482529    7340 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 18:53:16.533829    7340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 18:53:16.557448    7340 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 18:53:16.589085    7340 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0229 18:53:16.598806    7340 ssh_runner.go:195] Run: which cri-dockerd
	I0229 18:53:16.604905    7340 command_runner.go:130] > /usr/bin/cri-dockerd
	I0229 18:53:16.613973    7340 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 18:53:16.623720    7340 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 18:53:16.673097    7340 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 18:53:16.869001    7340 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 18:53:17.047928    7340 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 18:53:17.048151    7340 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 18:53:17.087914    7340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:53:17.261145    7340 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 18:53:18.738821    7340 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.4775426s)
	I0229 18:53:18.748368    7340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0229 18:53:18.786212    7340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 18:53:18.816235    7340 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0229 18:53:19.000945    7340 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0229 18:53:19.195980    7340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:53:19.378757    7340 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0229 18:53:19.416122    7340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 18:53:19.446355    7340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 18:53:19.643834    7340 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0229 18:53:19.734207    7340 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0229 18:53:19.747144    7340 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0229 18:53:19.756230    7340 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0229 18:53:19.756300    7340 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0229 18:53:19.756300    7340 command_runner.go:130] > Device: 0,22	Inode: 894         Links: 1
	I0229 18:53:19.756300    7340 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0229 18:53:19.756300    7340 command_runner.go:130] > Access: 2024-02-29 18:53:19.830222465 +0000
	I0229 18:53:19.756300    7340 command_runner.go:130] > Modify: 2024-02-29 18:53:19.830222465 +0000
	I0229 18:53:19.756300    7340 command_runner.go:130] > Change: 2024-02-29 18:53:19.834222598 +0000
	I0229 18:53:19.756300    7340 command_runner.go:130] >  Birth: -
	I0229 18:53:19.756300    7340 start.go:543] Will wait 60s for crictl version
	I0229 18:53:19.767376    7340 ssh_runner.go:195] Run: which crictl
	I0229 18:53:19.770476    7340 command_runner.go:130] > /usr/bin/crictl
	I0229 18:53:19.781442    7340 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 18:53:19.845955    7340 command_runner.go:130] > Version:  0.1.0
	I0229 18:53:19.847726    7340 command_runner.go:130] > RuntimeName:  docker
	I0229 18:53:19.847803    7340 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0229 18:53:19.847803    7340 command_runner.go:130] > RuntimeApiVersion:  v1
	I0229 18:53:19.848168    7340 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0229 18:53:19.856597    7340 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 18:53:19.888667    7340 command_runner.go:130] > 24.0.7
	I0229 18:53:19.900455    7340 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 18:53:19.930933    7340 command_runner.go:130] > 24.0.7
	I0229 18:53:19.933405    7340 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0229 18:53:19.934313    7340 out.go:177]   - env NO_PROXY=172.26.62.28
	I0229 18:53:19.934935    7340 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0229 18:53:19.939001    7340 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0229 18:53:19.939080    7340 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0229 18:53:19.939080    7340 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0229 18:53:19.939080    7340 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:76:3f:19 Flags:up|broadcast|multicast|running}
	I0229 18:53:19.941310    7340 ip.go:210] interface addr: fe80::841a:4367:8c9:abc/64
	I0229 18:53:19.941310    7340 ip.go:210] interface addr: 172.26.48.1/20
	I0229 18:53:19.947669    7340 ssh_runner.go:195] Run: grep 172.26.48.1	host.minikube.internal$ /etc/hosts
	I0229 18:53:19.952358    7340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.26.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 18:53:19.968727    7340 certs.go:56] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600 for IP: 172.26.56.47
	I0229 18:53:19.976597    7340 certs.go:190] acquiring lock for shared ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 18:53:19.977220    7340 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0229 18:53:19.977431    7340 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0229 18:53:19.977715    7340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0229 18:53:19.978246    7340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0229 18:53:19.978579    7340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0229 18:53:19.978824    7340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0229 18:53:19.979756    7340 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4356.pem (1338 bytes)
	W0229 18:53:19.979990    7340 certs.go:433] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4356_empty.pem, impossibly tiny 0 bytes
	I0229 18:53:19.979990    7340 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0229 18:53:19.980680    7340 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0229 18:53:19.980846    7340 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0229 18:53:19.980846    7340 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0229 18:53:19.982135    7340 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem (1708 bytes)
	I0229 18:53:19.982516    7340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem -> /usr/share/ca-certificates/43562.pem
	I0229 18:53:19.982751    7340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:53:19.982823    7340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4356.pem -> /usr/share/ca-certificates/4356.pem
	I0229 18:53:19.984041    7340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 18:53:20.029237    7340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 18:53:20.074038    7340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 18:53:20.117026    7340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 18:53:20.160074    7340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem --> /usr/share/ca-certificates/43562.pem (1708 bytes)
	I0229 18:53:20.204925    7340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 18:53:20.249445    7340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4356.pem --> /usr/share/ca-certificates/4356.pem (1338 bytes)
	I0229 18:53:20.305172    7340 ssh_runner.go:195] Run: openssl version
	I0229 18:53:20.308132    7340 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0229 18:53:20.323629    7340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43562.pem && ln -fs /usr/share/ca-certificates/43562.pem /etc/ssl/certs/43562.pem"
	I0229 18:53:20.353187    7340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43562.pem
	I0229 18:53:20.356146    7340 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 29 17:55 /usr/share/ca-certificates/43562.pem
	I0229 18:53:20.360789    7340 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:55 /usr/share/ca-certificates/43562.pem
	I0229 18:53:20.371321    7340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43562.pem
	I0229 18:53:20.379318    7340 command_runner.go:130] > 3ec20f2e
	I0229 18:53:20.389239    7340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43562.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 18:53:20.416866    7340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 18:53:20.443648    7340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:53:20.445830    7340 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 29 17:41 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:53:20.451077    7340 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:41 /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:53:20.458489    7340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 18:53:20.474556    7340 command_runner.go:130] > b5213941
	I0229 18:53:20.484907    7340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 18:53:20.513337    7340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4356.pem && ln -fs /usr/share/ca-certificates/4356.pem /etc/ssl/certs/4356.pem"
	I0229 18:53:20.541115    7340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4356.pem
	I0229 18:53:20.549307    7340 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 29 17:55 /usr/share/ca-certificates/4356.pem
	I0229 18:53:20.549655    7340 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:55 /usr/share/ca-certificates/4356.pem
	I0229 18:53:20.558619    7340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4356.pem
	I0229 18:53:20.568685    7340 command_runner.go:130] > 51391683
	I0229 18:53:20.578071    7340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4356.pem /etc/ssl/certs/51391683.0"
	I0229 18:53:20.613090    7340 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 18:53:20.616230    7340 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 18:53:20.619892    7340 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 18:53:20.628776    7340 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 18:53:20.663977    7340 command_runner.go:130] > cgroupfs
	I0229 18:53:20.664848    7340 cni.go:84] Creating CNI manager for ""
	I0229 18:53:20.664848    7340 cni.go:136] 2 nodes found, recommending kindnet
	I0229 18:53:20.664939    7340 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 18:53:20.664939    7340 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.26.56.47 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-421600 NodeName:multinode-421600-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.26.62.28"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.26.56.47 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 18:53:20.665160    7340 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.26.56.47
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-421600-m02"
	  kubeletExtraArgs:
	    node-ip: 172.26.56.47
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.26.62.28"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 18:53:20.665233    7340 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-421600-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.26.56.47
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-421600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 18:53:20.673958    7340 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 18:53:20.691650    7340 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	I0229 18:53:20.691650    7340 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0229 18:53:20.699783    7340 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0229 18:53:20.720019    7340 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet
	I0229 18:53:20.720019    7340 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl
	I0229 18:53:20.720019    7340 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm
	I0229 18:53:21.669748    7340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0229 18:53:21.680298    7340 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0229 18:53:21.690798    7340 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0229 18:53:21.695575    7340 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0229 18:53:21.695753    7340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0229 18:53:25.171119    7340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0229 18:53:25.178608    7340 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0229 18:53:25.185994    7340 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0229 18:53:25.190762    7340 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0229 18:53:25.190970    7340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0229 18:53:28.783771    7340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:53:28.811089    7340 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0229 18:53:28.820923    7340 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0229 18:53:28.822649    7340 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0229 18:53:28.827617    7340 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0229 18:53:28.827712    7340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
	I0229 18:53:29.442518    7340 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0229 18:53:29.459774    7340 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0229 18:53:29.489652    7340 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 18:53:29.531198    7340 ssh_runner.go:195] Run: grep 172.26.62.28	control-plane.minikube.internal$ /etc/hosts
	I0229 18:53:29.538053    7340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.26.62.28	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
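The bash one-liner above is an idempotent /etc/hosts rewrite: drop any stale line ending in `control-plane.minikube.internal`, append the current IP, and copy the result back over /etc/hosts. The same rewrite as a Go sketch (hypothetical helper operating on the file's contents as a string):

```go
package main

import (
	"fmt"
	"strings"
)

// updateHostsEntry removes any existing line ending in "\t<host>" and
// appends a fresh "IP\thost" entry, mirroring the grep -v / echo / cp
// pipeline run over /etc/hosts.
func updateHostsEntry(hosts, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	in := "127.0.0.1\tlocalhost\n172.26.0.1\tcontrol-plane.minikube.internal\n"
	fmt.Print(updateHostsEntry(in, "172.26.62.28", "control-plane.minikube.internal"))
}
```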
	I0229 18:53:29.559885    7340 host.go:66] Checking if "multinode-421600" exists ...
	I0229 18:53:29.560456    7340 config.go:182] Loaded profile config "multinode-421600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 18:53:29.560784    7340 start.go:304] JoinCluster: &{Name:multinode-421600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-421600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.26.62.28 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.26.56.47 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 18:53:29.561019    7340 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0229 18:53:29.561072    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 18:53:31.506724    7340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 18:53:31.506724    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:53:31.506997    7340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 18:53:33.841641    7340 main.go:141] libmachine: [stdout =====>] : 172.26.62.28
	
	I0229 18:53:33.841641    7340 main.go:141] libmachine: [stderr =====>] : 
	I0229 18:53:33.851088    7340 sshutil.go:53] new ssh client: &{IP:172.26.62.28 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600\id_rsa Username:docker}
	I0229 18:53:34.024356    7340 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token zi5o60.xwyii5k9p7h7i3yl --discovery-token-ca-cert-hash sha256:cee10ebbc824bfc36c0d81f93293570211b0e6bda8098cea612d080b286ee20e 
	I0229 18:53:34.027375    7340 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.4661087s)
	I0229 18:53:34.027375    7340 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.26.56.47 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0229 18:53:34.027375    7340 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zi5o60.xwyii5k9p7h7i3yl --discovery-token-ca-cert-hash sha256:cee10ebbc824bfc36c0d81f93293570211b0e6bda8098cea612d080b286ee20e --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-421600-m02"
	I0229 18:53:34.085233    7340 command_runner.go:130] ! W0229 18:53:34.248374    1327 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0229 18:53:34.250723    7340 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 18:53:36.538433    7340 command_runner.go:130] > [preflight] Running pre-flight checks
	I0229 18:53:36.538433    7340 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0229 18:53:36.538433    7340 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0229 18:53:36.538433    7340 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 18:53:36.538433    7340 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 18:53:36.538433    7340 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0229 18:53:36.538433    7340 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0229 18:53:36.538433    7340 command_runner.go:130] > This node has joined the cluster:
	I0229 18:53:36.538433    7340 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0229 18:53:36.538433    7340 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0229 18:53:36.538433    7340 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0229 18:53:36.538433    7340 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zi5o60.xwyii5k9p7h7i3yl --discovery-token-ca-cert-hash sha256:cee10ebbc824bfc36c0d81f93293570211b0e6bda8098cea612d080b286ee20e --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-421600-m02": (2.5109185s)
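The worker join above is driven entirely by the line the control plane printed for `kubeadm token create --print-join-command`: minikube re-runs it on the worker with `--ignore-preflight-errors`, `--cri-socket` and `--node-name` appended. If the pieces were needed individually, a rough parser might look like this (hypothetical helper with placeholder token values; minikube itself simply re-executes the whole line):

```go
package main

import (
	"fmt"
	"strings"
)

// parseJoinCommand extracts the endpoint, bootstrap token and discovery
// CA-cert hash from a `kubeadm token create --print-join-command` line.
func parseJoinCommand(cmd string) (endpoint, token, caHash string) {
	fields := strings.Fields(cmd)
	for i, f := range fields {
		switch {
		case f == "join" && i+1 < len(fields):
			endpoint = fields[i+1]
		case f == "--token" && i+1 < len(fields):
			token = fields[i+1]
		case f == "--discovery-token-ca-cert-hash" && i+1 < len(fields):
			caHash = fields[i+1]
		}
	}
	return
}

func main() {
	// Placeholder token/hash, not the values from this run.
	ep, tok, hash := parseJoinCommand(
		"kubeadm join control-plane.minikube.internal:8443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:deadbeef")
	fmt.Println(ep, tok, hash)
}
```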
	I0229 18:53:36.538433    7340 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0229 18:53:36.735845    7340 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0229 18:53:36.920396    7340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19 minikube.k8s.io/name=multinode-421600 minikube.k8s.io/updated_at=2024_02_29T18_53_36_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 18:53:37.038826    7340 command_runner.go:130] > node/multinode-421600-m02 labeled
	I0229 18:53:37.038938    7340 start.go:306] JoinCluster complete in 7.4777396s
	I0229 18:53:37.039001    7340 cni.go:84] Creating CNI manager for ""
	I0229 18:53:37.039001    7340 cni.go:136] 2 nodes found, recommending kindnet
	I0229 18:53:37.049720    7340 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0229 18:53:37.059274    7340 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0229 18:53:37.059326    7340 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0229 18:53:37.059326    7340 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0229 18:53:37.059355    7340 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0229 18:53:37.059355    7340 command_runner.go:130] > Access: 2024-02-29 18:48:57.853544100 +0000
	I0229 18:53:37.059355    7340 command_runner.go:130] > Modify: 2024-02-23 03:39:37.000000000 +0000
	I0229 18:53:37.059355    7340 command_runner.go:130] > Change: 2024-02-29 18:48:48.933000000 +0000
	I0229 18:53:37.059355    7340 command_runner.go:130] >  Birth: -
	I0229 18:53:37.059355    7340 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0229 18:53:37.059355    7340 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0229 18:53:37.103104    7340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0229 18:53:37.455745    7340 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0229 18:53:37.455745    7340 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0229 18:53:37.455745    7340 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0229 18:53:37.455745    7340 command_runner.go:130] > daemonset.apps/kindnet configured
	I0229 18:53:37.456286    7340 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 18:53:37.457128    7340 kapi.go:59] client config for multinode-421600: &rest.Config{Host:"https://172.26.62.28:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-421600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-421600\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ff0600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 18:53:37.457925    7340 round_trippers.go:463] GET https://172.26.62.28:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0229 18:53:37.457997    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:37.457997    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:37.457997    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:37.470430    7340 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0229 18:53:37.471982    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:37.471982    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:37.471982    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:37.471982    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:37.471982    7340 round_trippers.go:580]     Content-Length: 291
	I0229 18:53:37.471982    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:37 GMT
	I0229 18:53:37.471982    7340 round_trippers.go:580]     Audit-Id: 03064cc3-19b6-4fa5-9e23-84b937837c25
	I0229 18:53:37.471982    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:37.472074    7340 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b9192a23-827d-4114-8861-df907bfdc0ef","resourceVersion":"419","creationTimestamp":"2024-02-29T18:50:38Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0229 18:53:37.472155    7340 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-421600" context rescaled to 1 replicas
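The rescale step reads the `autoscaling/v1` Scale subresource shown in the response body above. A small sketch of decoding just the fields involved (hypothetical struct for illustration, not minikube's types):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// scale mirrors the parts of an autoscaling/v1 Scale response that the
// rescale step inspects: desired replicas and the pod selector.
type scale struct {
	Spec struct {
		Replicas int32 `json:"replicas"`
	} `json:"spec"`
	Status struct {
		Replicas int32  `json:"replicas"`
		Selector string `json:"selector"`
	} `json:"status"`
}

// decodeScale parses a Scale response body such as the one logged above.
func decodeScale(body []byte) (scale, error) {
	var s scale
	err := json.Unmarshal(body, &s)
	return s, err
}

func main() {
	// Trimmed copy of the response body logged above.
	body := []byte(`{"kind":"Scale","apiVersion":"autoscaling/v1","spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}`)
	s, err := decodeScale(body)
	if err != nil {
		panic(err)
	}
	fmt.Println(s.Spec.Replicas, s.Status.Selector)
}
```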
	I0229 18:53:37.472155    7340 start.go:223] Will wait 6m0s for node &{Name:m02 IP:172.26.56.47 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0229 18:53:37.472899    7340 out.go:177] * Verifying Kubernetes components...
	I0229 18:53:37.485300    7340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:53:37.510645    7340 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 18:53:37.511467    7340 kapi.go:59] client config for multinode-421600: &rest.Config{Host:"https://172.26.62.28:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-421600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-421600\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ff0600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 18:53:37.512326    7340 node_ready.go:35] waiting up to 6m0s for node "multinode-421600-m02" to be "Ready" ...
	I0229 18:53:37.512617    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600-m02
	I0229 18:53:37.512617    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:37.512710    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:37.512710    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:37.515625    7340 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:53:37.515625    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:37.515625    7340 round_trippers.go:580]     Audit-Id: cd0df3ac-6354-4dbe-b417-ccbbdab2d646
	I0229 18:53:37.515625    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:37.515625    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:37.515625    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:37.515625    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:37.516985    7340 round_trippers.go:580]     Content-Length: 4035
	I0229 18:53:37.516985    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:37 GMT
	I0229 18:53:37.517293    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"fdb7ee2c-ccad-4d0d-bef6-6790b83f5cb6","resourceVersion":"564","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_53_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec" [truncated 3011 chars]
	I0229 18:53:38.019169    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600-m02
	I0229 18:53:38.019169    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:38.019169    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:38.019169    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:38.023417    7340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 18:53:38.023417    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:38.023417    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:38.023417    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:38.023417    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:38.023417    7340 round_trippers.go:580]     Content-Length: 4035
	I0229 18:53:38.023417    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:38 GMT
	I0229 18:53:38.023417    7340 round_trippers.go:580]     Audit-Id: 072cbe82-8be0-4b47-8745-e17690f0e7f3
	I0229 18:53:38.023417    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:38.023417    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"fdb7ee2c-ccad-4d0d-bef6-6790b83f5cb6","resourceVersion":"564","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_53_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec" [truncated 3011 chars]
	I0229 18:53:38.521460    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600-m02
	I0229 18:53:38.521540    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:38.521624    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:38.521699    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:38.525388    7340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:53:38.526171    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:38.526206    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:38.526240    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:38.526240    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:38.526240    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:38.526240    7340 round_trippers.go:580]     Content-Length: 4035
	I0229 18:53:38.526240    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:38 GMT
	I0229 18:53:38.526240    7340 round_trippers.go:580]     Audit-Id: 173618a8-f23c-4503-8f39-991a981e263d
	I0229 18:53:38.526539    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"fdb7ee2c-ccad-4d0d-bef6-6790b83f5cb6","resourceVersion":"564","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_53_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec" [truncated 3011 chars]
	I0229 18:53:39.018849    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600-m02
	I0229 18:53:39.018919    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:39.018919    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:39.018919    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:39.022343    7340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:53:39.022638    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:39.022638    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:39 GMT
	I0229 18:53:39.022638    7340 round_trippers.go:580]     Audit-Id: aaf36cda-ad89-4b89-942c-24f0e2583ddd
	I0229 18:53:39.022638    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:39.022638    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:39.022638    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:39.022638    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:39.022721    7340 round_trippers.go:580]     Content-Length: 4035
	I0229 18:53:39.022763    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"fdb7ee2c-ccad-4d0d-bef6-6790b83f5cb6","resourceVersion":"564","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_53_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec" [truncated 3011 chars]
	I0229 18:53:39.531306    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600-m02
	I0229 18:53:39.531410    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:39.531410    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:39.531410    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:39.535304    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:53:39.535304    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:39.535304    7340 round_trippers.go:580]     Content-Length: 4035
	I0229 18:53:39.535304    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:39 GMT
	I0229 18:53:39.535304    7340 round_trippers.go:580]     Audit-Id: 5a158004-7892-4d61-bf95-2ff2e0338975
	I0229 18:53:39.535304    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:39.535304    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:39.535304    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:39.535304    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:39.535304    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"fdb7ee2c-ccad-4d0d-bef6-6790b83f5cb6","resourceVersion":"564","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_53_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec" [truncated 3011 chars]
	I0229 18:53:39.535949    7340 node_ready.go:58] node "multinode-421600-m02" has status "Ready":"False"
	I0229 18:53:40.034145    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600-m02
	I0229 18:53:40.034237    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:40.034299    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:40.034299    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:40.036349    7340 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:53:40.036349    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:40.036349    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:40.036349    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:40.036349    7340 round_trippers.go:580]     Content-Length: 4035
	I0229 18:53:40.036349    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:40 GMT
	I0229 18:53:40.036349    7340 round_trippers.go:580]     Audit-Id: ff707cc8-a298-4840-8b3c-ab2d88ceef9a
	I0229 18:53:40.036349    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:40.036349    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:40.038562    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"fdb7ee2c-ccad-4d0d-bef6-6790b83f5cb6","resourceVersion":"564","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_53_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec" [truncated 3011 chars]
	I0229 18:53:40.516839    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600-m02
	I0229 18:53:40.516905    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:40.516905    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:40.516905    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:40.522784    7340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 18:53:40.522784    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:40.522784    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:40.522784    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:40.522784    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:40.522784    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:40.522784    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:40 GMT
	I0229 18:53:40.522784    7340 round_trippers.go:580]     Audit-Id: 17b3cb64-52fd-41a7-baea-b03739781f2e
	I0229 18:53:40.522784    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"fdb7ee2c-ccad-4d0d-bef6-6790b83f5cb6","resourceVersion":"571","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_53_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 18:53:41.028247    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600-m02
	I0229 18:53:41.028327    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:41.028327    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:41.028327    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:41.032320    7340 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 18:53:41.032320    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:41.032403    7340 round_trippers.go:580]     Audit-Id: b68e1fd5-3655-4fc3-ae9b-7cd186e3a830
	I0229 18:53:41.032403    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:41.032403    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:41.032403    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:41.032403    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:41.032449    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:41 GMT
	I0229 18:53:41.032926    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"fdb7ee2c-ccad-4d0d-bef6-6790b83f5cb6","resourceVersion":"571","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_53_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 18:53:41.525920    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600-m02
	I0229 18:53:41.525920    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:41.525920    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:41.525920    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:41.526648    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:53:41.526648    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:41.526648    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:41.526648    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:41 GMT
	I0229 18:53:41.526648    7340 round_trippers.go:580]     Audit-Id: 953c1bdc-fb53-4979-9b50-48a381929865
	I0229 18:53:41.526648    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:41.526648    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:41.526648    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:41.530103    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"fdb7ee2c-ccad-4d0d-bef6-6790b83f5cb6","resourceVersion":"571","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_53_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 18:53:42.020814    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600-m02
	I0229 18:53:42.020966    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:42.020966    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:42.020966    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:42.021361    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:53:42.021361    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:42.021361    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:42.021361    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:42 GMT
	I0229 18:53:42.021361    7340 round_trippers.go:580]     Audit-Id: a17bf390-6dce-4a27-9095-fdaf92fe5c1b
	I0229 18:53:42.021361    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:42.021361    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:42.021361    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:42.025490    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"fdb7ee2c-ccad-4d0d-bef6-6790b83f5cb6","resourceVersion":"571","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_53_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 18:53:42.025917    7340 node_ready.go:58] node "multinode-421600-m02" has status "Ready":"False"
	I0229 18:53:42.522641    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600-m02
	I0229 18:53:42.522895    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:42.522895    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:42.522895    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:42.524286    7340 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 18:53:42.526917    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:42.526917    7340 round_trippers.go:580]     Audit-Id: 26bafd79-3fde-48cd-a9d6-8c56d4196b31
	I0229 18:53:42.526917    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:42.526997    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:42.526997    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:42.526997    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:42.526997    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:42 GMT
	I0229 18:53:42.527175    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"fdb7ee2c-ccad-4d0d-bef6-6790b83f5cb6","resourceVersion":"571","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_53_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 18:53:43.022351    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600-m02
	I0229 18:53:43.022351    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:43.022351    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:43.022351    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:43.022931    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:53:43.022931    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:43.022931    7340 round_trippers.go:580]     Audit-Id: 0438e282-faca-4679-a0b1-962e139ba9e1
	I0229 18:53:43.022931    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:43.022931    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:43.022931    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:43.022931    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:43.022931    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:43 GMT
	I0229 18:53:43.026625    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"fdb7ee2c-ccad-4d0d-bef6-6790b83f5cb6","resourceVersion":"571","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_53_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 18:53:43.530790    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600-m02
	I0229 18:53:43.530790    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:43.530880    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:43.530880    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:43.531092    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:53:43.531092    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:43.531092    7340 round_trippers.go:580]     Audit-Id: c158ba42-2178-4bd1-a1aa-1cbf1cc2bc4e
	I0229 18:53:43.531092    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:43.531092    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:43.531092    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:43.535396    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:43.535396    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:43 GMT
	I0229 18:53:43.535581    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"fdb7ee2c-ccad-4d0d-bef6-6790b83f5cb6","resourceVersion":"571","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_53_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 18:53:44.027778    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600-m02
	I0229 18:53:44.027778    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:44.027871    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:44.027871    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:44.028047    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:53:44.028047    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:44.028047    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:44.028047    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:44.028047    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:44 GMT
	I0229 18:53:44.028047    7340 round_trippers.go:580]     Audit-Id: c9fbb1f9-9045-462e-88e3-48b05f47cad9
	I0229 18:53:44.028047    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:44.028047    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:44.032661    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"fdb7ee2c-ccad-4d0d-bef6-6790b83f5cb6","resourceVersion":"571","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_53_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 18:53:44.033039    7340 node_ready.go:58] node "multinode-421600-m02" has status "Ready":"False"
	I0229 18:53:44.528471    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600-m02
	I0229 18:53:44.528471    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:44.528471    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:44.528570    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:44.528851    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:53:44.528851    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:44.528851    7340 round_trippers.go:580]     Audit-Id: 7ec8eb3f-6f93-4224-a6c0-d186f14e6be4
	I0229 18:53:44.528851    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:44.528851    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:44.528851    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:44.528851    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:44.528851    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:44 GMT
	I0229 18:53:44.532662    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"fdb7ee2c-ccad-4d0d-bef6-6790b83f5cb6","resourceVersion":"571","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_53_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 18:53:45.031112    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600-m02
	I0229 18:53:45.031200    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:45.031200    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:45.031200    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:45.032703    7340 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 18:53:45.035447    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:45.035447    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:45.035447    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:45 GMT
	I0229 18:53:45.035447    7340 round_trippers.go:580]     Audit-Id: 4f73ed6b-4494-47a7-81cc-f704915f2161
	I0229 18:53:45.035447    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:45.035447    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:45.035447    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:45.035714    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"fdb7ee2c-ccad-4d0d-bef6-6790b83f5cb6","resourceVersion":"571","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_53_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 18:53:45.519398    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600-m02
	I0229 18:53:45.519398    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:45.519398    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:45.519595    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:45.523657    7340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 18:53:45.523732    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:45.523732    7340 round_trippers.go:580]     Audit-Id: 205dec46-ab9a-4e8a-a5c5-ad78e0962a86
	I0229 18:53:45.523732    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:45.523804    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:45.523804    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:45.523804    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:45.523804    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:45 GMT
	I0229 18:53:45.524257    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"fdb7ee2c-ccad-4d0d-bef6-6790b83f5cb6","resourceVersion":"571","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_53_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 18:53:46.020973    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600-m02
	I0229 18:53:46.021042    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:46.021042    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:46.021042    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:46.023392    7340 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:53:46.024777    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:46.024777    7340 round_trippers.go:580]     Audit-Id: d25c0d6b-3104-43cf-820d-a67136e72e9f
	I0229 18:53:46.024777    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:46.024777    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:46.024777    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:46.024777    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:46.024777    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:46 GMT
	I0229 18:53:46.025123    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"fdb7ee2c-ccad-4d0d-bef6-6790b83f5cb6","resourceVersion":"571","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_53_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 18:53:46.527273    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600-m02
	I0229 18:53:46.527303    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:46.527365    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:46.527396    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:46.527701    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:53:46.531000    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:46.531000    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:46.531000    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:46 GMT
	I0229 18:53:46.531000    7340 round_trippers.go:580]     Audit-Id: cc516bac-48b7-4de7-80a8-d21027a38b5e
	I0229 18:53:46.531000    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:46.531000    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:46.531000    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:46.531589    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"fdb7ee2c-ccad-4d0d-bef6-6790b83f5cb6","resourceVersion":"571","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_53_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3120 chars]
	I0229 18:53:46.531589    7340 node_ready.go:58] node "multinode-421600-m02" has status "Ready":"False"
	I0229 18:53:47.020281    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600-m02
	I0229 18:53:47.020506    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:47.020506    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:47.020506    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:47.021369    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:53:47.024506    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:47.024506    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:47.024506    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:47.024506    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:47.024506    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:47 GMT
	I0229 18:53:47.024563    7340 round_trippers.go:580]     Audit-Id: f566b686-d788-4e2f-b59b-a7584037fa4e
	I0229 18:53:47.024563    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:47.024563    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"fdb7ee2c-ccad-4d0d-bef6-6790b83f5cb6","resourceVersion":"583","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_53_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 18:53:47.522057    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600-m02
	I0229 18:53:47.522057    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:47.522057    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:47.522057    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:47.522477    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:53:47.526449    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:47.526449    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:47.526449    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:47.526528    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:47 GMT
	I0229 18:53:47.526528    7340 round_trippers.go:580]     Audit-Id: 804e615b-3abe-4d73-8e9f-7df313ea20c4
	I0229 18:53:47.526528    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:47.526528    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:47.526713    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"fdb7ee2c-ccad-4d0d-bef6-6790b83f5cb6","resourceVersion":"583","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_53_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 18:53:48.014896    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600-m02
	I0229 18:53:48.015212    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:48.015257    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:48.015257    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:48.015653    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:53:48.015653    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:48.015653    7340 round_trippers.go:580]     Audit-Id: 20d6009a-7049-45de-90e1-32102b0b243b
	I0229 18:53:48.015653    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:48.015653    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:48.015653    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:48.015653    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:48.015653    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:48 GMT
	I0229 18:53:48.015653    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"fdb7ee2c-ccad-4d0d-bef6-6790b83f5cb6","resourceVersion":"583","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_53_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 18:53:48.517785    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600-m02
	I0229 18:53:48.517785    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:48.517785    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:48.517785    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:48.518416    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:53:48.518416    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:48.521481    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:48.521481    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:48 GMT
	I0229 18:53:48.521481    7340 round_trippers.go:580]     Audit-Id: de11a115-c9da-4b8c-921b-b4950378334d
	I0229 18:53:48.521481    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:48.521481    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:48.521481    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:48.521821    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"fdb7ee2c-ccad-4d0d-bef6-6790b83f5cb6","resourceVersion":"583","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_53_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 18:53:49.032928    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600-m02
	I0229 18:53:49.032928    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:49.032928    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:49.032928    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:49.033484    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:53:49.033484    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:49.033484    7340 round_trippers.go:580]     Audit-Id: 0f4ef982-088b-4305-9e24-1b278ba066f8
	I0229 18:53:49.037008    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:49.037008    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:49.037008    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:49.037008    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:49.037008    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:49 GMT
	I0229 18:53:49.037199    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"fdb7ee2c-ccad-4d0d-bef6-6790b83f5cb6","resourceVersion":"583","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_53_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 18:53:49.037570    7340 node_ready.go:58] node "multinode-421600-m02" has status "Ready":"False"
	I0229 18:53:49.530840    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600-m02
	I0229 18:53:49.530928    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:49.530928    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:49.530928    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:49.531303    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:53:49.531303    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:49.531303    7340 round_trippers.go:580]     Audit-Id: 741c5bce-5604-4968-a9a5-e5b1c79feb44
	I0229 18:53:49.531303    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:49.531303    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:49.531303    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:49.534820    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:49.534820    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:49 GMT
	I0229 18:53:49.534921    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"fdb7ee2c-ccad-4d0d-bef6-6790b83f5cb6","resourceVersion":"583","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_53_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 18:53:50.014534    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600-m02
	I0229 18:53:50.014637    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:50.014637    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:50.014637    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:50.018602    7340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:53:50.018602    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:50.018602    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:50.018602    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:50.018602    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:50 GMT
	I0229 18:53:50.018602    7340 round_trippers.go:580]     Audit-Id: fc0610d5-8ad3-4d78-b12f-a4769cb6b003
	I0229 18:53:50.018602    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:50.018602    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:50.018602    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"fdb7ee2c-ccad-4d0d-bef6-6790b83f5cb6","resourceVersion":"583","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_53_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 18:53:50.519891    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600-m02
	I0229 18:53:50.519967    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:50.519967    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:50.519967    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:50.527355    7340 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 18:53:50.528031    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:50.528031    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:50.528031    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:50.528031    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:50 GMT
	I0229 18:53:50.528031    7340 round_trippers.go:580]     Audit-Id: 841c038d-d29a-406c-9e68-36d4f44c9373
	I0229 18:53:50.528031    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:50.528031    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:50.528031    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"fdb7ee2c-ccad-4d0d-bef6-6790b83f5cb6","resourceVersion":"583","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_53_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 18:53:51.027633    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600-m02
	I0229 18:53:51.027633    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:51.027633    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:51.027633    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:51.033255    7340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 18:53:51.033255    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:51.033255    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:51 GMT
	I0229 18:53:51.033255    7340 round_trippers.go:580]     Audit-Id: ea815381-5154-4951-b774-49f1342c30db
	I0229 18:53:51.033255    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:51.033255    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:51.033255    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:51.033255    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:51.033816    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"fdb7ee2c-ccad-4d0d-bef6-6790b83f5cb6","resourceVersion":"583","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_53_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 18:53:51.522152    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600-m02
	I0229 18:53:51.522152    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:51.522152    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:51.522152    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:51.525259    7340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:53:51.525259    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:51.526336    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:51.526336    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:51.526336    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:51 GMT
	I0229 18:53:51.526336    7340 round_trippers.go:580]     Audit-Id: d3c9c2ca-d371-4199-b61f-8996b3cc39e9
	I0229 18:53:51.526398    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:51.526398    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:51.526398    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"fdb7ee2c-ccad-4d0d-bef6-6790b83f5cb6","resourceVersion":"583","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_53_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 18:53:51.526924    7340 node_ready.go:58] node "multinode-421600-m02" has status "Ready":"False"
	I0229 18:53:52.026464    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600-m02
	I0229 18:53:52.026464    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:52.026464    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:52.026464    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:52.029202    7340 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:53:52.030460    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:52.030460    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:52.030460    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:52.030460    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:52.030460    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:52 GMT
	I0229 18:53:52.030460    7340 round_trippers.go:580]     Audit-Id: 67e66391-ecc5-4888-b004-11b85fa3b1c4
	I0229 18:53:52.030460    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:52.030748    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"fdb7ee2c-ccad-4d0d-bef6-6790b83f5cb6","resourceVersion":"583","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_53_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 18:53:52.533283    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600-m02
	I0229 18:53:52.533283    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:52.533283    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:52.533283    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:52.533919    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:53:52.533919    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:52.533919    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:52 GMT
	I0229 18:53:52.536982    7340 round_trippers.go:580]     Audit-Id: 82585a90-2228-4bcf-b9c5-4b20ec147b57
	I0229 18:53:52.536982    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:52.536982    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:52.536982    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:52.536982    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:52.537189    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"fdb7ee2c-ccad-4d0d-bef6-6790b83f5cb6","resourceVersion":"583","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_53_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 18:53:53.021672    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600-m02
	I0229 18:53:53.021773    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:53.021773    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:53.021867    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:53.022139    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:53:53.022139    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:53.022139    7340 round_trippers.go:580]     Audit-Id: cf774bcf-bc7f-401e-bb5e-8100bdcad116
	I0229 18:53:53.022139    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:53.022139    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:53.025762    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:53.025762    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:53.025762    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:53 GMT
	I0229 18:53:53.025956    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"fdb7ee2c-ccad-4d0d-bef6-6790b83f5cb6","resourceVersion":"583","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_53_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 18:53:53.516064    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600-m02
	I0229 18:53:53.516148    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:53.516148    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:53.516148    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:53.516463    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:53:53.516463    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:53.516463    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:53.516463    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:53.516463    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:53.516463    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:53 GMT
	I0229 18:53:53.516463    7340 round_trippers.go:580]     Audit-Id: 1abfea18-2612-42a4-9f70-112c69300f98
	I0229 18:53:53.516463    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:53.520144    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"fdb7ee2c-ccad-4d0d-bef6-6790b83f5cb6","resourceVersion":"583","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_53_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 18:53:54.015029    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600-m02
	I0229 18:53:54.015105    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:54.015105    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:54.015105    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:54.015643    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:53:54.015643    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:54.015643    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:54.015643    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:54.015643    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:54 GMT
	I0229 18:53:54.015643    7340 round_trippers.go:580]     Audit-Id: d8fc41bb-e68f-43e5-af27-5623e698bee0
	I0229 18:53:54.015643    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:54.015643    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:54.019131    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"fdb7ee2c-ccad-4d0d-bef6-6790b83f5cb6","resourceVersion":"583","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_53_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 18:53:54.019484    7340 node_ready.go:58] node "multinode-421600-m02" has status "Ready":"False"
	I0229 18:53:54.519987    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600-m02
	I0229 18:53:54.520076    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:54.520076    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:54.520076    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:54.520346    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:53:54.524818    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:54.524818    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:54.524818    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:54.524818    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:54.524818    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:54 GMT
	I0229 18:53:54.524818    7340 round_trippers.go:580]     Audit-Id: fd45935f-aec0-4b02-b33b-82ee2825969d
	I0229 18:53:54.524818    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:54.525044    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"fdb7ee2c-ccad-4d0d-bef6-6790b83f5cb6","resourceVersion":"583","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_53_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3389 chars]
	I0229 18:53:55.018471    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600-m02
	I0229 18:53:55.018583    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:55.018583    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:55.018583    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:55.018915    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:53:55.018915    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:55.018915    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:55.018915    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:55.018915    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:55.018915    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:55 GMT
	I0229 18:53:55.018915    7340 round_trippers.go:580]     Audit-Id: 881194e4-9b75-4d6b-b493-a1b3538fde72
	I0229 18:53:55.018915    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:55.022495    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"fdb7ee2c-ccad-4d0d-bef6-6790b83f5cb6","resourceVersion":"598","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_53_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3255 chars]
	I0229 18:53:55.022614    7340 node_ready.go:49] node "multinode-421600-m02" has status "Ready":"True"
	I0229 18:53:55.022614    7340 node_ready.go:38] duration metric: took 17.5092243s waiting for node "multinode-421600-m02" to be "Ready" ...
	I0229 18:53:55.022614    7340 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 18:53:55.023018    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/namespaces/kube-system/pods
	I0229 18:53:55.023018    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:55.023018    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:55.023018    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:55.023136    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:53:55.029098    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:55.029098    7340 round_trippers.go:580]     Audit-Id: aa74f6c3-2967-4f53-a275-a95f480ca74e
	I0229 18:53:55.029098    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:55.029098    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:55.029098    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:55.029194    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:55.029194    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:55 GMT
	I0229 18:53:55.030441    7340 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"598"},"items":[{"metadata":{"name":"coredns-5dd5756b68-5qhb2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"cb647b50-f478-4265-9ff1-b66190c46393","resourceVersion":"415","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3eee4899-9868-46e9-9907-7fbe2995bab1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eee4899-9868-46e9-9907-7fbe2995bab1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67426 chars]
	I0229 18:53:55.033542    7340 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5qhb2" in "kube-system" namespace to be "Ready" ...
	I0229 18:53:55.033662    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5qhb2
	I0229 18:53:55.033662    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:55.033662    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:55.033662    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:55.036623    7340 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 18:53:55.036623    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:55.036623    7340 round_trippers.go:580]     Audit-Id: 09aa6368-63f2-4568-8a31-593d02b7db3b
	I0229 18:53:55.036623    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:55.036623    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:55.036623    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:55.036623    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:55.036623    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:55 GMT
	I0229 18:53:55.037526    7340 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5qhb2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"cb647b50-f478-4265-9ff1-b66190c46393","resourceVersion":"415","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3eee4899-9868-46e9-9907-7fbe2995bab1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eee4899-9868-46e9-9907-7fbe2995bab1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6282 chars]
	I0229 18:53:55.038145    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:53:55.038242    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:55.038242    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:55.038242    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:55.039106    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:53:55.039106    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:55.039106    7340 round_trippers.go:580]     Audit-Id: d9ef7e3c-33aa-4003-834b-13946b2a5f7e
	I0229 18:53:55.041336    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:55.041336    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:55.041336    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:55.041398    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:55.041398    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:55 GMT
	I0229 18:53:55.041725    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"422","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4957 chars]
	I0229 18:53:55.042114    7340 pod_ready.go:92] pod "coredns-5dd5756b68-5qhb2" in "kube-system" namespace has status "Ready":"True"
	I0229 18:53:55.042146    7340 pod_ready.go:81] duration metric: took 8.6037ms waiting for pod "coredns-5dd5756b68-5qhb2" in "kube-system" namespace to be "Ready" ...
	I0229 18:53:55.042146    7340 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 18:53:55.042247    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-421600
	I0229 18:53:55.042289    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:55.042289    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:55.042325    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:55.049336    7340 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 18:53:55.049380    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:55.049380    7340 round_trippers.go:580]     Audit-Id: 976a4ded-1a1d-4640-b803-2629da51bbad
	I0229 18:53:55.049380    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:55.049380    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:55.049421    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:55.049421    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:55.049421    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:55 GMT
	I0229 18:53:55.049421    7340 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-421600","namespace":"kube-system","uid":"a1147083-ea42-4f83-8bf0-24ab0f1f79fa","resourceVersion":"386","creationTimestamp":"2024-02-29T18:50:38Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.26.62.28:2379","kubernetes.io/config.hash":"cc377ea9919ea43502b39da82a7097ab","kubernetes.io/config.mirror":"cc377ea9919ea43502b39da82a7097ab","kubernetes.io/config.seen":"2024-02-29T18:50:38.626325846Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5852 chars]
	I0229 18:53:55.050102    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:53:55.050102    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:55.050102    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:55.050102    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:55.053406    7340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:53:55.053406    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:55.053406    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:55.053406    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:55 GMT
	I0229 18:53:55.053406    7340 round_trippers.go:580]     Audit-Id: 3e98ca5f-b894-49dc-b653-32f78e3c81d9
	I0229 18:53:55.053406    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:55.053406    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:55.053406    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:55.053406    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"422","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4957 chars]
	I0229 18:53:55.054000    7340 pod_ready.go:92] pod "etcd-multinode-421600" in "kube-system" namespace has status "Ready":"True"
	I0229 18:53:55.054000    7340 pod_ready.go:81] duration metric: took 11.8096ms waiting for pod "etcd-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 18:53:55.054000    7340 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 18:53:55.054000    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-421600
	I0229 18:53:55.054000    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:55.054000    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:55.054000    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:55.057260    7340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:53:55.057260    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:55.057260    7340 round_trippers.go:580]     Audit-Id: 7e7324d2-df8b-46a2-af61-ba3636ab3305
	I0229 18:53:55.057260    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:55.057260    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:55.057260    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:55.057260    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:55.057260    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:55 GMT
	I0229 18:53:55.057260    7340 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-421600","namespace":"kube-system","uid":"c2d5c1c0-2c5e-4070-832b-ae1e52d2e9a8","resourceVersion":"384","creationTimestamp":"2024-02-29T18:50:38Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.26.62.28:8443","kubernetes.io/config.hash":"3224776adbc0bdfa8ecf16b474e549a3","kubernetes.io/config.mirror":"3224776adbc0bdfa8ecf16b474e549a3","kubernetes.io/config.seen":"2024-02-29T18:50:38.626330946Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7390 chars]
	I0229 18:53:55.057906    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:53:55.057906    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:55.057906    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:55.057906    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:55.061077    7340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 18:53:55.061077    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:55.061077    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:55 GMT
	I0229 18:53:55.061077    7340 round_trippers.go:580]     Audit-Id: bcb239f8-a3ea-4e78-857e-1fd9ac994b55
	I0229 18:53:55.061077    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:55.061077    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:55.061077    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:55.061225    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:55.061304    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"422","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4957 chars]
	I0229 18:53:55.061304    7340 pod_ready.go:92] pod "kube-apiserver-multinode-421600" in "kube-system" namespace has status "Ready":"True"
	I0229 18:53:55.061304    7340 pod_ready.go:81] duration metric: took 7.304ms waiting for pod "kube-apiserver-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 18:53:55.061304    7340 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 18:53:55.061929    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-421600
	I0229 18:53:55.061929    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:55.061929    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:55.061929    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:55.063532    7340 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 18:53:55.065017    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:55.065017    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:55.065017    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:55.065017    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:55 GMT
	I0229 18:53:55.065081    7340 round_trippers.go:580]     Audit-Id: bdcd920e-55c7-4a13-aae8-1a6cc988a7b6
	I0229 18:53:55.065081    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:55.065081    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:55.065286    7340 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-421600","namespace":"kube-system","uid":"a41ee888-f6df-43d4-9799-67a9ef0b6c87","resourceVersion":"385","creationTimestamp":"2024-02-29T18:50:38Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"dabef371df5cd2a8b883d06621dfc6bd","kubernetes.io/config.mirror":"dabef371df5cd2a8b883d06621dfc6bd","kubernetes.io/config.seen":"2024-02-29T18:50:38.626332146Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6965 chars]
	I0229 18:53:55.065286    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:53:55.065286    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:55.065286    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:55.065286    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:55.066545    7340 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 18:53:55.066545    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:55.066545    7340 round_trippers.go:580]     Audit-Id: 37c590e0-60cd-44dd-bb14-561ec62d2337
	I0229 18:53:55.066545    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:55.066545    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:55.066545    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:55.066545    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:55.066545    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:55 GMT
	I0229 18:53:55.068565    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"422","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4957 chars]
	I0229 18:53:55.068643    7340 pod_ready.go:92] pod "kube-controller-manager-multinode-421600" in "kube-system" namespace has status "Ready":"True"
	I0229 18:53:55.068643    7340 pod_ready.go:81] duration metric: took 7.3389ms waiting for pod "kube-controller-manager-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 18:53:55.068643    7340 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7c7xc" in "kube-system" namespace to be "Ready" ...
	I0229 18:53:55.219906    7340 request.go:629] Waited for 151.2547ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.62.28:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7c7xc
	I0229 18:53:55.220315    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7c7xc
	I0229 18:53:55.220400    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:55.220400    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:55.220400    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:55.221196    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:53:55.221196    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:55.221196    7340 round_trippers.go:580]     Audit-Id: 9b399b07-704d-42b1-93b9-efda439a041c
	I0229 18:53:55.221196    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:55.224468    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:55.224468    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:55.224468    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:55.224468    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:55 GMT
	I0229 18:53:55.224585    7340 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7c7xc","generateName":"kube-proxy-","namespace":"kube-system","uid":"6f8e7fe9-d8e5-47ca-80fd-7e5f7ae43140","resourceVersion":"579","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a888d9f6-ed77-4118-830b-881d923ceb9c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a888d9f6-ed77-4118-830b-881d923ceb9c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0229 18:53:55.424385    7340 request.go:629] Waited for 199.2025ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.62.28:8443/api/v1/nodes/multinode-421600-m02
	I0229 18:53:55.424385    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600-m02
	I0229 18:53:55.424385    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:55.424385    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:55.424725    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:55.425785    7340 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 18:53:55.428693    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:55.428693    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:55.428734    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:55.428734    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:55.428734    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:55.428734    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:55 GMT
	I0229 18:53:55.428798    7340 round_trippers.go:580]     Audit-Id: 162b4b18-7195-4d7e-b0f9-6ff8e076b699
	I0229 18:53:55.428938    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"fdb7ee2c-ccad-4d0d-bef6-6790b83f5cb6","resourceVersion":"600","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T18_53_36_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":" [truncated 3135 chars]
	I0229 18:53:55.429463    7340 pod_ready.go:92] pod "kube-proxy-7c7xc" in "kube-system" namespace has status "Ready":"True"
	I0229 18:53:55.429506    7340 pod_ready.go:81] duration metric: took 360.8424ms waiting for pod "kube-proxy-7c7xc" in "kube-system" namespace to be "Ready" ...
	I0229 18:53:55.429506    7340 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fpk6m" in "kube-system" namespace to be "Ready" ...
	I0229 18:53:55.626787    7340 request.go:629] Waited for 197.1143ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.62.28:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fpk6m
	I0229 18:53:55.627120    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fpk6m
	I0229 18:53:55.627120    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:55.627120    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:55.627120    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:55.627504    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:53:55.627504    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:55.627504    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:55 GMT
	I0229 18:53:55.627504    7340 round_trippers.go:580]     Audit-Id: 351fbcab-7d6d-4537-aaf0-50a79aa38e3a
	I0229 18:53:55.627504    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:55.627504    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:55.627504    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:55.627504    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:55.631331    7340 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fpk6m","generateName":"kube-proxy-","namespace":"kube-system","uid":"4c99c6ec-5ab0-434d-b5a9-cb24b10f8bbf","resourceVersion":"366","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a888d9f6-ed77-4118-830b-881d923ceb9c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a888d9f6-ed77-4118-830b-881d923ceb9c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0229 18:53:55.830820    7340 request.go:629] Waited for 198.507ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:53:55.830935    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:53:55.830935    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:55.830935    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:55.830935    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:55.831422    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:53:55.831422    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:55.831422    7340 round_trippers.go:580]     Audit-Id: 1b1cdf9d-8507-4a51-b8a9-78568e879098
	I0229 18:53:55.831422    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:55.831422    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:55.831422    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:55.831422    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:55.831422    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:55 GMT
	I0229 18:53:55.835452    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"422","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4957 chars]
	I0229 18:53:55.835934    7340 pod_ready.go:92] pod "kube-proxy-fpk6m" in "kube-system" namespace has status "Ready":"True"
	I0229 18:53:55.835934    7340 pod_ready.go:81] duration metric: took 406.4058ms waiting for pod "kube-proxy-fpk6m" in "kube-system" namespace to be "Ready" ...
	I0229 18:53:55.835934    7340 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 18:53:56.023875    7340 request.go:629] Waited for 187.8389ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.62.28:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-421600
	I0229 18:53:56.024086    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-421600
	I0229 18:53:56.024086    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:56.024180    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:56.024180    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:56.024710    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:53:56.024710    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:56.024710    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:56 GMT
	I0229 18:53:56.024710    7340 round_trippers.go:580]     Audit-Id: 19fd9294-6701-4226-9eb1-653060cb47cf
	I0229 18:53:56.024710    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:56.024710    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:56.024710    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:56.028226    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:56.028319    7340 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-421600","namespace":"kube-system","uid":"6742b97c-a3db-4fca-8da3-54fcde6d405a","resourceVersion":"383","creationTimestamp":"2024-02-29T18:50:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a2c94c0a4c322f0bf7fcafad0430344f","kubernetes.io/config.mirror":"a2c94c0a4c322f0bf7fcafad0430344f","kubernetes.io/config.seen":"2024-02-29T18:50:38.626333146Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4695 chars]
	I0229 18:53:56.232328    7340 request.go:629] Waited for 203.357ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:53:56.232603    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes/multinode-421600
	I0229 18:53:56.232709    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:56.232709    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:56.232709    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:56.237045    7340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 18:53:56.237045    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:56.237127    7340 round_trippers.go:580]     Audit-Id: aef92f0b-2af2-4b3d-ba19-f48635142515
	I0229 18:53:56.237127    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:56.237127    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:56.237127    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:56.237127    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:56.237127    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:56 GMT
	I0229 18:53:56.237282    7340 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"422","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","fi [truncated 4957 chars]
	I0229 18:53:56.237883    7340 pod_ready.go:92] pod "kube-scheduler-multinode-421600" in "kube-system" namespace has status "Ready":"True"
	I0229 18:53:56.237883    7340 pod_ready.go:81] duration metric: took 401.9263ms waiting for pod "kube-scheduler-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 18:53:56.237883    7340 pod_ready.go:38] duration metric: took 1.2148988s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 18:53:56.237960    7340 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 18:53:56.246922    7340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 18:53:56.270898    7340 system_svc.go:56] duration metric: took 32.9358ms WaitForService to wait for kubelet.
	I0229 18:53:56.270898    7340 kubeadm.go:581] duration metric: took 18.7977016s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 18:53:56.270898    7340 node_conditions.go:102] verifying NodePressure condition ...
	I0229 18:53:56.418198    7340 request.go:629] Waited for 147.2925ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.62.28:8443/api/v1/nodes
	I0229 18:53:56.418407    7340 round_trippers.go:463] GET https://172.26.62.28:8443/api/v1/nodes
	I0229 18:53:56.418664    7340 round_trippers.go:469] Request Headers:
	I0229 18:53:56.418664    7340 round_trippers.go:473]     Accept: application/json, */*
	I0229 18:53:56.418664    7340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 18:53:56.419110    7340 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0229 18:53:56.419110    7340 round_trippers.go:577] Response Headers:
	I0229 18:53:56.419110    7340 round_trippers.go:580]     Audit-Id: ad42023a-8ef6-4b2c-b4de-96d163fd545e
	I0229 18:53:56.419110    7340 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 18:53:56.419110    7340 round_trippers.go:580]     Content-Type: application/json
	I0229 18:53:56.419110    7340 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 18:53:56.419110    7340 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 18:53:56.419110    7340 round_trippers.go:580]     Date: Thu, 29 Feb 2024 18:53:56 GMT
	I0229 18:53:56.423593    7340 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"600"},"items":[{"metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"422","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9137 chars]
	I0229 18:53:56.424308    7340 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:53:56.424308    7340 node_conditions.go:123] node cpu capacity is 2
	I0229 18:53:56.424308    7340 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 18:53:56.424382    7340 node_conditions.go:123] node cpu capacity is 2
	I0229 18:53:56.424382    7340 node_conditions.go:105] duration metric: took 153.4754ms to run NodePressure ...
	I0229 18:53:56.424382    7340 start.go:228] waiting for startup goroutines ...
	I0229 18:53:56.424456    7340 start.go:242] writing updated cluster config ...
	I0229 18:53:56.433525    7340 ssh_runner.go:195] Run: rm -f paused
	I0229 18:53:56.564147    7340 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 18:53:56.564764    7340 out.go:177] * Done! kubectl is now configured to use "multinode-421600" cluster and "default" namespace by default
	
	
	==> Docker <==
	Feb 29 18:51:04 multinode-421600 dockerd[1284]: time="2024-02-29T18:51:04.087448969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 18:51:04 multinode-421600 cri-dockerd[1172]: time="2024-02-29T18:51:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f53a12cbddd58f25452708552d7b73e5cc5b4c0c4c2d07be70b7ee5c6fbc20a5/resolv.conf as [nameserver 172.26.48.1]"
	Feb 29 18:51:04 multinode-421600 dockerd[1284]: time="2024-02-29T18:51:04.268698861Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 18:51:04 multinode-421600 dockerd[1284]: time="2024-02-29T18:51:04.269360973Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 18:51:04 multinode-421600 dockerd[1284]: time="2024-02-29T18:51:04.269482875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 18:51:04 multinode-421600 dockerd[1284]: time="2024-02-29T18:51:04.269914383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 18:51:05 multinode-421600 dockerd[1284]: time="2024-02-29T18:51:05.574112295Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 18:51:05 multinode-421600 dockerd[1284]: time="2024-02-29T18:51:05.575466020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 18:51:05 multinode-421600 dockerd[1284]: time="2024-02-29T18:51:05.575869827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 18:51:05 multinode-421600 dockerd[1284]: time="2024-02-29T18:51:05.576078731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 18:51:05 multinode-421600 cri-dockerd[1172]: time="2024-02-29T18:51:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f4d0b06ecf4a6cae737e5041f089e491f9319897540283322f5fd5b28e4e8486/resolv.conf as [nameserver 172.26.48.1]"
	Feb 29 18:51:05 multinode-421600 dockerd[1284]: time="2024-02-29T18:51:05.868875279Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 18:51:05 multinode-421600 dockerd[1284]: time="2024-02-29T18:51:05.869024383Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 18:51:05 multinode-421600 dockerd[1284]: time="2024-02-29T18:51:05.869045884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 18:51:05 multinode-421600 dockerd[1284]: time="2024-02-29T18:51:05.869159987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 18:54:19 multinode-421600 dockerd[1284]: time="2024-02-29T18:54:19.373060045Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 18:54:19 multinode-421600 dockerd[1284]: time="2024-02-29T18:54:19.373551969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 18:54:19 multinode-421600 dockerd[1284]: time="2024-02-29T18:54:19.373593172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 18:54:19 multinode-421600 dockerd[1284]: time="2024-02-29T18:54:19.373866385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 18:54:19 multinode-421600 cri-dockerd[1172]: time="2024-02-29T18:54:19Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/957fdff4fb39ad9d8fcd05ffe1d9a3648860705df05f4c4e76c5e8ff99fbfdf6/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Feb 29 18:54:20 multinode-421600 cri-dockerd[1172]: time="2024-02-29T18:54:20Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Feb 29 18:54:20 multinode-421600 dockerd[1284]: time="2024-02-29T18:54:20.661550998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 18:54:20 multinode-421600 dockerd[1284]: time="2024-02-29T18:54:20.661970523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 18:54:20 multinode-421600 dockerd[1284]: time="2024-02-29T18:54:20.662016526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 18:54:20 multinode-421600 dockerd[1284]: time="2024-02-29T18:54:20.662592360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f23bdec6fb5c7       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   46 seconds ago      Running             busybox                   0                   957fdff4fb39a       busybox-5b5d89c9d6-4lvtb
	7be33bccda15c       ead0a4a53df89                                                                                         4 minutes ago       Running             coredns                   0                   f4d0b06ecf4a6       coredns-5dd5756b68-5qhb2
	8f42b1a35229e       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       0                   f53a12cbddd58       storage-provisioner
	92f6a9511f4fe       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              4 minutes ago       Running             kindnet-cni               0                   779c3df146b26       kindnet-447dh
	2f8a25ce65da1       83f6cc407eed8                                                                                         4 minutes ago       Running             kube-proxy                0                   39324e6654181       kube-proxy-fpk6m
	9245396d3b64c       73deb9a3f7025                                                                                         4 minutes ago       Running             etcd                      0                   1ae101209a8f8       etcd-multinode-421600
	ea0adcda4ba9f       7fe0e6f37db33                                                                                         4 minutes ago       Running             kube-apiserver            0                   7f9c423f4482e       kube-apiserver-multinode-421600
	52fe82a87fa81       d058aa5ab969c                                                                                         4 minutes ago       Running             kube-controller-manager   0                   d9fcf1cc8d350       kube-controller-manager-multinode-421600
	b8c8786727c5e       e3db313c6dbc0                                                                                         4 minutes ago       Running             kube-scheduler            0                   2a191aae0ba26       kube-scheduler-multinode-421600
	
	
	==> coredns [7be33bccda15] <==
	[INFO] 10.244.0.3:60828 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000136608s
	[INFO] 10.244.1.2:39156 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000192012s
	[INFO] 10.244.1.2:52508 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.00016501s
	[INFO] 10.244.1.2:34502 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000079905s
	[INFO] 10.244.1.2:38146 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000136708s
	[INFO] 10.244.1.2:47439 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000133208s
	[INFO] 10.244.1.2:59021 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000056503s
	[INFO] 10.244.1.2:39203 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000148509s
	[INFO] 10.244.1.2:58216 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071705s
	[INFO] 10.244.0.3:43754 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000265615s
	[INFO] 10.244.0.3:60250 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111706s
	[INFO] 10.244.0.3:34465 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000072904s
	[INFO] 10.244.0.3:43590 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000143708s
	[INFO] 10.244.1.2:42897 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111607s
	[INFO] 10.244.1.2:33030 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160909s
	[INFO] 10.244.1.2:33206 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068004s
	[INFO] 10.244.1.2:45851 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064203s
	[INFO] 10.244.0.3:34007 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120507s
	[INFO] 10.244.0.3:52254 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000114807s
	[INFO] 10.244.0.3:35961 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00016441s
	[INFO] 10.244.0.3:47154 - 5 "PTR IN 1.48.26.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000110406s
	[INFO] 10.244.1.2:58408 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092206s
	[INFO] 10.244.1.2:33917 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000159809s
	[INFO] 10.244.1.2:35059 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000111407s
	[INFO] 10.244.1.2:34636 - 5 "PTR IN 1.48.26.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000072604s
	
	
	==> describe nodes <==
	Name:               multinode-421600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-421600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19
	                    minikube.k8s.io/name=multinode-421600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_29T18_50_39_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 18:50:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-421600
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 18:55:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 18:54:43 +0000   Thu, 29 Feb 2024 18:50:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 18:54:43 +0000   Thu, 29 Feb 2024 18:50:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 18:54:43 +0000   Thu, 29 Feb 2024 18:50:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 18:54:43 +0000   Thu, 29 Feb 2024 18:51:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.26.62.28
	  Hostname:    multinode-421600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 87a5f5f7c8a64b86810958c5f0955f22
	  System UUID:                d3f22368-baf0-cc4c-80fb-62de8b17a3eb
	  Boot ID:                    0ea98b0d-3f90-4b42-b7a2-c4cbe21980b4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-4lvtb                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 coredns-5dd5756b68-5qhb2                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m15s
	  kube-system                 etcd-multinode-421600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m28s
	  kube-system                 kindnet-447dh                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m15s
	  kube-system                 kube-apiserver-multinode-421600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 kube-controller-manager-multinode-421600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 kube-proxy-fpk6m                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 kube-scheduler-multinode-421600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m13s  kube-proxy       
	  Normal  Starting                 4m28s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m28s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m28s  kubelet          Node multinode-421600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m28s  kubelet          Node multinode-421600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m28s  kubelet          Node multinode-421600 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m16s  node-controller  Node multinode-421600 event: Registered Node multinode-421600 in Controller
	  Normal  NodeReady                4m3s   kubelet          Node multinode-421600 status is now: NodeReady
	
	
	Name:               multinode-421600-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-421600-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19
	                    minikube.k8s.io/name=multinode-421600
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_02_29T18_53_36_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 18:53:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-421600-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 18:54:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 18:54:38 +0000   Thu, 29 Feb 2024 18:53:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 18:54:38 +0000   Thu, 29 Feb 2024 18:53:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 18:54:38 +0000   Thu, 29 Feb 2024 18:53:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 18:54:38 +0000   Thu, 29 Feb 2024 18:53:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.26.56.47
	  Hostname:    multinode-421600-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 4c892ca2186d442f9ff2d8b36bca7ee2
	  System UUID:                6a36fbf6-756c-e04e-acf4-cc2e8747fe39
	  Boot ID:                    bc6fc782-3a55-4602-8ca2-ca640e0dda1d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-dk9k8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 kindnet-zblbg               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      90s
	  kube-system                 kube-proxy-7c7xc            0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 81s                kube-proxy       
	  Normal  NodeHasSufficientMemory  90s (x5 over 91s)  kubelet          Node multinode-421600-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    90s (x5 over 91s)  kubelet          Node multinode-421600-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     90s (x5 over 91s)  kubelet          Node multinode-421600-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           86s                node-controller  Node multinode-421600-m02 event: Registered Node multinode-421600-m02 in Controller
	  Normal  NodeReady                71s                kubelet          Node multinode-421600-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.715049] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.740630] systemd-fstab-generator[113]: Ignoring "noauto" option for root device
	[  +7.308539] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Feb29 18:49] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.169495] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[Feb29 18:50] systemd-fstab-generator[901]: Ignoring "noauto" option for root device
	[  +0.089879] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.485339] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +0.185650] systemd-fstab-generator[953]: Ignoring "noauto" option for root device
	[  +0.210115] systemd-fstab-generator[967]: Ignoring "noauto" option for root device
	[  +1.729128] systemd-fstab-generator[1125]: Ignoring "noauto" option for root device
	[  +0.201398] systemd-fstab-generator[1137]: Ignoring "noauto" option for root device
	[  +0.191869] systemd-fstab-generator[1149]: Ignoring "noauto" option for root device
	[  +0.253620] systemd-fstab-generator[1164]: Ignoring "noauto" option for root device
	[ +12.641571] systemd-fstab-generator[1270]: Ignoring "noauto" option for root device
	[  +0.103399] kauditd_printk_skb: 205 callbacks suppressed
	[  +8.856706] systemd-fstab-generator[1648]: Ignoring "noauto" option for root device
	[  +0.096593] kauditd_printk_skb: 51 callbacks suppressed
	[  +8.253729] systemd-fstab-generator[2586]: Ignoring "noauto" option for root device
	[  +0.122774] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.266071] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.200421] kauditd_printk_skb: 29 callbacks suppressed
	[Feb29 18:54] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [9245396d3b64] <==
	{"level":"info","ts":"2024-02-29T18:50:32.775984Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2288677eaed97ae9 switched to configuration voters=(2488352587919227625)"}
	{"level":"info","ts":"2024-02-29T18:50:32.776284Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3ab6b21c82a909c4","local-member-id":"2288677eaed97ae9","added-peer-id":"2288677eaed97ae9","added-peer-peer-urls":["https://172.26.62.28:2380"]}
	{"level":"info","ts":"2024-02-29T18:50:32.778577Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-02-29T18:50:32.780809Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.26.62.28:2380"}
	{"level":"info","ts":"2024-02-29T18:50:32.781005Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.26.62.28:2380"}
	{"level":"info","ts":"2024-02-29T18:50:32.781352Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"2288677eaed97ae9","initial-advertise-peer-urls":["https://172.26.62.28:2380"],"listen-peer-urls":["https://172.26.62.28:2380"],"advertise-client-urls":["https://172.26.62.28:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.26.62.28:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-29T18:50:32.784374Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-29T18:50:33.593841Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2288677eaed97ae9 is starting a new election at term 1"}
	{"level":"info","ts":"2024-02-29T18:50:33.593962Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2288677eaed97ae9 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-02-29T18:50:33.594077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2288677eaed97ae9 received MsgPreVoteResp from 2288677eaed97ae9 at term 1"}
	{"level":"info","ts":"2024-02-29T18:50:33.594154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2288677eaed97ae9 became candidate at term 2"}
	{"level":"info","ts":"2024-02-29T18:50:33.594179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2288677eaed97ae9 received MsgVoteResp from 2288677eaed97ae9 at term 2"}
	{"level":"info","ts":"2024-02-29T18:50:33.594279Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2288677eaed97ae9 became leader at term 2"}
	{"level":"info","ts":"2024-02-29T18:50:33.594402Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2288677eaed97ae9 elected leader 2288677eaed97ae9 at term 2"}
	{"level":"info","ts":"2024-02-29T18:50:33.596649Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T18:50:33.601005Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"2288677eaed97ae9","local-member-attributes":"{Name:multinode-421600 ClientURLs:[https://172.26.62.28:2379]}","request-path":"/0/members/2288677eaed97ae9/attributes","cluster-id":"3ab6b21c82a909c4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-29T18:50:33.601122Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T18:50:33.602038Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3ab6b21c82a909c4","local-member-id":"2288677eaed97ae9","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T18:50:33.603875Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T18:50:33.604132Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T18:50:33.604385Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T18:50:33.605541Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.26.62.28:2379"}
	{"level":"info","ts":"2024-02-29T18:50:33.607865Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-29T18:50:33.608019Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-29T18:50:33.60918Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 18:55:07 up 6 min,  0 users,  load average: 0.40, 0.41, 0.20
	Linux multinode-421600 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [92f6a9511f4f] <==
	I0229 18:53:58.842737       1 main.go:250] Node multinode-421600-m02 has CIDR [10.244.1.0/24] 
	I0229 18:54:08.852009       1 main.go:223] Handling node with IPs: map[172.26.62.28:{}]
	I0229 18:54:08.852330       1 main.go:227] handling current node
	I0229 18:54:08.852486       1 main.go:223] Handling node with IPs: map[172.26.56.47:{}]
	I0229 18:54:08.852570       1 main.go:250] Node multinode-421600-m02 has CIDR [10.244.1.0/24] 
	I0229 18:54:18.864509       1 main.go:223] Handling node with IPs: map[172.26.62.28:{}]
	I0229 18:54:18.864624       1 main.go:227] handling current node
	I0229 18:54:18.864644       1 main.go:223] Handling node with IPs: map[172.26.56.47:{}]
	I0229 18:54:18.864657       1 main.go:250] Node multinode-421600-m02 has CIDR [10.244.1.0/24] 
	I0229 18:54:28.871543       1 main.go:223] Handling node with IPs: map[172.26.62.28:{}]
	I0229 18:54:28.871649       1 main.go:227] handling current node
	I0229 18:54:28.871663       1 main.go:223] Handling node with IPs: map[172.26.56.47:{}]
	I0229 18:54:28.871670       1 main.go:250] Node multinode-421600-m02 has CIDR [10.244.1.0/24] 
	I0229 18:54:38.880147       1 main.go:223] Handling node with IPs: map[172.26.62.28:{}]
	I0229 18:54:38.880169       1 main.go:227] handling current node
	I0229 18:54:38.880179       1 main.go:223] Handling node with IPs: map[172.26.56.47:{}]
	I0229 18:54:38.880185       1 main.go:250] Node multinode-421600-m02 has CIDR [10.244.1.0/24] 
	I0229 18:54:48.892549       1 main.go:223] Handling node with IPs: map[172.26.62.28:{}]
	I0229 18:54:48.892652       1 main.go:227] handling current node
	I0229 18:54:48.892665       1 main.go:223] Handling node with IPs: map[172.26.56.47:{}]
	I0229 18:54:48.892673       1 main.go:250] Node multinode-421600-m02 has CIDR [10.244.1.0/24] 
	I0229 18:54:58.899929       1 main.go:223] Handling node with IPs: map[172.26.62.28:{}]
	I0229 18:54:58.900051       1 main.go:227] handling current node
	I0229 18:54:58.900066       1 main.go:223] Handling node with IPs: map[172.26.56.47:{}]
	I0229 18:54:58.900074       1 main.go:250] Node multinode-421600-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [ea0adcda4ba9] <==
	I0229 18:50:35.254330       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0229 18:50:35.254920       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0229 18:50:35.256287       1 aggregator.go:166] initial CRD sync complete...
	I0229 18:50:35.256592       1 autoregister_controller.go:141] Starting autoregister controller
	I0229 18:50:35.256666       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0229 18:50:35.256742       1 cache.go:39] Caches are synced for autoregister controller
	I0229 18:50:35.258226       1 shared_informer.go:318] Caches are synced for configmaps
	I0229 18:50:35.258505       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0229 18:50:35.275985       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0229 18:50:35.302861       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0229 18:50:36.061580       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0229 18:50:36.070628       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0229 18:50:36.070736       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0229 18:50:36.778145       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0229 18:50:36.849453       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0229 18:50:37.003553       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0229 18:50:37.016905       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.26.62.28]
	I0229 18:50:37.018073       1 controller.go:624] quota admission added evaluator for: endpoints
	I0229 18:50:37.025733       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0229 18:50:37.220554       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0229 18:50:38.508190       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0229 18:50:38.524478       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0229 18:50:38.544365       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0229 18:50:51.222891       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0229 18:50:51.312992       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [52fe82a87fa8] <==
	I0229 18:51:03.681039       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="98.602µs"
	I0229 18:51:03.717125       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="125.702µs"
	I0229 18:51:05.518951       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0229 18:51:06.267469       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="95.113µs"
	I0229 18:51:07.286489       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.180622ms"
	I0229 18:51:07.287051       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="104.014µs"
	I0229 18:53:36.675529       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-421600-m02\" does not exist"
	I0229 18:53:36.691163       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-421600-m02" podCIDRs=["10.244.1.0/24"]
	I0229 18:53:36.707093       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-zblbg"
	I0229 18:53:36.707125       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-7c7xc"
	I0229 18:53:40.551829       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-421600-m02"
	I0229 18:53:40.551959       1 event.go:307] "Event occurred" object="multinode-421600-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-421600-m02 event: Registered Node multinode-421600-m02 in Controller"
	I0229 18:53:55.012558       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-421600-m02"
	I0229 18:54:18.899034       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5b5d89c9d6 to 2"
	I0229 18:54:18.911894       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-dk9k8"
	I0229 18:54:18.919982       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-4lvtb"
	I0229 18:54:18.940725       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="42.401858ms"
	I0229 18:54:18.959262       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="16.29733ms"
	I0229 18:54:18.959352       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="41.102µs"
	I0229 18:54:18.968585       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="74.004µs"
	I0229 18:54:18.969832       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="85.404µs"
	I0229 18:54:20.840543       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="7.829566ms"
	I0229 18:54:20.841625       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="57.503µs"
	I0229 18:54:21.594353       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="6.873507ms"
	I0229 18:54:21.594860       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="42.502µs"
	
	
	==> kube-proxy [2f8a25ce65da] <==
	I0229 18:50:53.074708       1 server_others.go:69] "Using iptables proxy"
	I0229 18:50:53.092341       1 node.go:141] Successfully retrieved node IP: 172.26.62.28
	I0229 18:50:53.146378       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0229 18:50:53.146404       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0229 18:50:53.149985       1 server_others.go:152] "Using iptables Proxier"
	I0229 18:50:53.150312       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 18:50:53.150825       1 server.go:846] "Version info" version="v1.28.4"
	I0229 18:50:53.150851       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 18:50:53.151682       1 config.go:188] "Starting service config controller"
	I0229 18:50:53.152018       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 18:50:53.152128       1 config.go:97] "Starting endpoint slice config controller"
	I0229 18:50:53.152136       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 18:50:53.153089       1 config.go:315] "Starting node config controller"
	I0229 18:50:53.153102       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 18:50:53.254073       1 shared_informer.go:318] Caches are synced for node config
	I0229 18:50:53.254154       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0229 18:50:53.254168       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [b8c8786727c5] <==
	W0229 18:50:35.255114       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0229 18:50:35.255175       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0229 18:50:35.255190       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0229 18:50:35.255198       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0229 18:50:36.212095       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0229 18:50:36.212964       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0229 18:50:36.212937       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0229 18:50:36.213254       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0229 18:50:36.224082       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0229 18:50:36.225483       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0229 18:50:36.241435       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0229 18:50:36.241985       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0229 18:50:36.295277       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0229 18:50:36.295305       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0229 18:50:36.495754       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0229 18:50:36.496464       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0229 18:50:36.536053       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0229 18:50:36.536397       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0229 18:50:36.536343       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0229 18:50:36.536962       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0229 18:50:36.553109       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0229 18:50:36.553299       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0229 18:50:36.560040       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0229 18:50:36.560240       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0229 18:50:39.441295       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 29 18:51:04 multinode-421600 kubelet[2607]: E0229 18:51:04.804389    2607 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cb647b50-f478-4265-9ff1-b66190c46393-config-volume podName:cb647b50-f478-4265-9ff1-b66190c46393 nodeName:}" failed. No retries permitted until 2024-02-29 18:51:05.304299683 +0000 UTC m=+26.829176834 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/cb647b50-f478-4265-9ff1-b66190c46393-config-volume") pod "coredns-5dd5756b68-5qhb2" (UID: "cb647b50-f478-4265-9ff1-b66190c46393") : failed to sync configmap cache: timed out waiting for the condition
	Feb 29 18:51:06 multinode-421600 kubelet[2607]: I0229 18:51:06.265648    2607 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-5qhb2" podStartSLOduration=15.265607179 podCreationTimestamp="2024-02-29 18:50:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-29 18:51:06.265165116 +0000 UTC m=+27.790042267" watchObservedRunningTime="2024-02-29 18:51:06.265607179 +0000 UTC m=+27.790484230"
	Feb 29 18:51:06 multinode-421600 kubelet[2607]: I0229 18:51:06.265748    2607 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=8.265729796 podCreationTimestamp="2024-02-29 18:50:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-29 18:51:05.222051587 +0000 UTC m=+26.746928638" watchObservedRunningTime="2024-02-29 18:51:06.265729796 +0000 UTC m=+27.790606847"
	Feb 29 18:51:38 multinode-421600 kubelet[2607]: E0229 18:51:38.801004    2607 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 18:51:38 multinode-421600 kubelet[2607]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 18:51:38 multinode-421600 kubelet[2607]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 18:51:38 multinode-421600 kubelet[2607]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 18:51:38 multinode-421600 kubelet[2607]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 18:52:38 multinode-421600 kubelet[2607]: E0229 18:52:38.799767    2607 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 18:52:38 multinode-421600 kubelet[2607]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 18:52:38 multinode-421600 kubelet[2607]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 18:52:38 multinode-421600 kubelet[2607]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 18:52:38 multinode-421600 kubelet[2607]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 18:53:38 multinode-421600 kubelet[2607]: E0229 18:53:38.799501    2607 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 18:53:38 multinode-421600 kubelet[2607]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 18:53:38 multinode-421600 kubelet[2607]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 18:53:38 multinode-421600 kubelet[2607]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 18:53:38 multinode-421600 kubelet[2607]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 18:54:18 multinode-421600 kubelet[2607]: I0229 18:54:18.935725    2607 topology_manager.go:215] "Topology Admit Handler" podUID="797e17a3-3d6f-4cb7-8672-72171e528b0d" podNamespace="default" podName="busybox-5b5d89c9d6-4lvtb"
	Feb 29 18:54:18 multinode-421600 kubelet[2607]: I0229 18:54:18.988513    2607 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z8g8\" (UniqueName: \"kubernetes.io/projected/797e17a3-3d6f-4cb7-8672-72171e528b0d-kube-api-access-7z8g8\") pod \"busybox-5b5d89c9d6-4lvtb\" (UID: \"797e17a3-3d6f-4cb7-8672-72171e528b0d\") " pod="default/busybox-5b5d89c9d6-4lvtb"
	Feb 29 18:54:38 multinode-421600 kubelet[2607]: E0229 18:54:38.798884    2607 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 18:54:38 multinode-421600 kubelet[2607]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 18:54:38 multinode-421600 kubelet[2607]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 18:54:38 multinode-421600 kubelet[2607]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 18:54:38 multinode-421600 kubelet[2607]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 18:54:59.614985    7416 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-421600 -n multinode-421600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-421600 -n multinode-421600: (10.9425548s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-421600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (53.02s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (522.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-421600
multinode_test.go:318: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-421600
multinode_test.go:318: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-421600: (1m21.6747177s)
multinode_test.go:323: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-421600 --wait=true -v=8 --alsologtostderr
E0229 19:10:23.234885    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
E0229 19:10:31.919483    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
E0229 19:13:35.128356    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
E0229 19:15:23.253721    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
E0229 19:15:31.937786    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
multinode_test.go:323: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-421600 --wait=true -v=8 --alsologtostderr: (6m47.9327678s)
multinode_test.go:328: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-421600
multinode_test.go:335: reported node list is not the same after restart. Before restart: multinode-421600	172.26.62.28
multinode-421600-m02	172.26.56.47
multinode-421600-m03	172.26.50.77

                                                
                                                
After restart: multinode-421600	172.26.52.109
multinode-421600-m02	172.26.62.204
multinode-421600-m03	172.26.59.9
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-421600 -n multinode-421600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-421600 -n multinode-421600: (11.1959801s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-421600 logs -n 25: (8.2747514s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | multinode-421600 ssh -n                                                                                                  | multinode-421600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:01 UTC | 29 Feb 24 19:01 UTC |
	|         | multinode-421600-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-421600 cp multinode-421600-m02:/home/docker/cp-test.txt                                                        | multinode-421600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:01 UTC | 29 Feb 24 19:01 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile1252078008\001\cp-test_multinode-421600-m02.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-421600 ssh -n                                                                                                  | multinode-421600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:01 UTC | 29 Feb 24 19:01 UTC |
	|         | multinode-421600-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-421600 cp multinode-421600-m02:/home/docker/cp-test.txt                                                        | multinode-421600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:01 UTC | 29 Feb 24 19:01 UTC |
	|         | multinode-421600:/home/docker/cp-test_multinode-421600-m02_multinode-421600.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-421600 ssh -n                                                                                                  | multinode-421600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:01 UTC | 29 Feb 24 19:02 UTC |
	|         | multinode-421600-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-421600 ssh -n multinode-421600 sudo cat                                                                        | multinode-421600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:02 UTC | 29 Feb 24 19:02 UTC |
	|         | /home/docker/cp-test_multinode-421600-m02_multinode-421600.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-421600 cp multinode-421600-m02:/home/docker/cp-test.txt                                                        | multinode-421600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:02 UTC | 29 Feb 24 19:02 UTC |
	|         | multinode-421600-m03:/home/docker/cp-test_multinode-421600-m02_multinode-421600-m03.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-421600 ssh -n                                                                                                  | multinode-421600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:02 UTC | 29 Feb 24 19:02 UTC |
	|         | multinode-421600-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-421600 ssh -n multinode-421600-m03 sudo cat                                                                    | multinode-421600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:02 UTC | 29 Feb 24 19:02 UTC |
	|         | /home/docker/cp-test_multinode-421600-m02_multinode-421600-m03.txt                                                       |                  |                   |         |                     |                     |
	| cp      | multinode-421600 cp testdata\cp-test.txt                                                                                 | multinode-421600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:02 UTC | 29 Feb 24 19:02 UTC |
	|         | multinode-421600-m03:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-421600 ssh -n                                                                                                  | multinode-421600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:02 UTC | 29 Feb 24 19:02 UTC |
	|         | multinode-421600-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-421600 cp multinode-421600-m03:/home/docker/cp-test.txt                                                        | multinode-421600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:02 UTC | 29 Feb 24 19:03 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile1252078008\001\cp-test_multinode-421600-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-421600 ssh -n                                                                                                  | multinode-421600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:03 UTC | 29 Feb 24 19:03 UTC |
	|         | multinode-421600-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-421600 cp multinode-421600-m03:/home/docker/cp-test.txt                                                        | multinode-421600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:03 UTC | 29 Feb 24 19:03 UTC |
	|         | multinode-421600:/home/docker/cp-test_multinode-421600-m03_multinode-421600.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-421600 ssh -n                                                                                                  | multinode-421600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:03 UTC | 29 Feb 24 19:03 UTC |
	|         | multinode-421600-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-421600 ssh -n multinode-421600 sudo cat                                                                        | multinode-421600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:03 UTC | 29 Feb 24 19:03 UTC |
	|         | /home/docker/cp-test_multinode-421600-m03_multinode-421600.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-421600 cp multinode-421600-m03:/home/docker/cp-test.txt                                                        | multinode-421600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:03 UTC | 29 Feb 24 19:04 UTC |
	|         | multinode-421600-m02:/home/docker/cp-test_multinode-421600-m03_multinode-421600-m02.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-421600 ssh -n                                                                                                  | multinode-421600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:04 UTC | 29 Feb 24 19:04 UTC |
	|         | multinode-421600-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-421600 ssh -n multinode-421600-m02 sudo cat                                                                    | multinode-421600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:04 UTC | 29 Feb 24 19:04 UTC |
	|         | /home/docker/cp-test_multinode-421600-m03_multinode-421600-m02.txt                                                       |                  |                   |         |                     |                     |
	| node    | multinode-421600 node stop m03                                                                                           | multinode-421600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:04 UTC | 29 Feb 24 19:04 UTC |
	| node    | multinode-421600 node start                                                                                              | multinode-421600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:05 UTC | 29 Feb 24 19:07 UTC |
	|         | m03 --alsologtostderr                                                                                                    |                  |                   |         |                     |                     |
	| node    | list -p multinode-421600                                                                                                 | multinode-421600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:07 UTC |                     |
	| stop    | -p multinode-421600                                                                                                      | multinode-421600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:07 UTC | 29 Feb 24 19:09 UTC |
	| start   | -p multinode-421600                                                                                                      | multinode-421600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:09 UTC | 29 Feb 24 19:16 UTC |
	|         | --wait=true -v=8                                                                                                         |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                                        |                  |                   |         |                     |                     |
	| node    | list -p multinode-421600                                                                                                 | multinode-421600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:16 UTC |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 19:09:15
	Running on machine: minikube5
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 19:09:15.779991    6464 out.go:291] Setting OutFile to fd 1448 ...
	I0229 19:09:15.780207    6464 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 19:09:15.780207    6464 out.go:304] Setting ErrFile to fd 1496...
	I0229 19:09:15.780207    6464 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 19:09:15.797726    6464 out.go:298] Setting JSON to false
	I0229 19:09:15.800724    6464 start.go:129] hostinfo: {"hostname":"minikube5","uptime":55492,"bootTime":1709178263,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0229 19:09:15.800724    6464 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 19:09:15.801722    6464 out.go:177] * [multinode-421600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 19:09:15.802725    6464 notify.go:220] Checking for updates...
	I0229 19:09:15.802725    6464 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 19:09:15.803724    6464 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 19:09:15.804718    6464 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0229 19:09:15.804718    6464 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 19:09:15.805713    6464 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 19:09:15.806725    6464 config.go:182] Loaded profile config "multinode-421600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 19:09:15.806725    6464 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 19:09:20.776251    6464 out.go:177] * Using the hyperv driver based on existing profile
	I0229 19:09:20.777144    6464 start.go:299] selected driver: hyperv
	I0229 19:09:20.777242    6464 start.go:903] validating driver "hyperv" against &{Name:multinode-421600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:multinode-421600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.26.62.28 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.26.56.47 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.26.50.77 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel
:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 19:09:20.777403    6464 start.go:914] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 19:09:20.825397    6464 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 19:09:20.825397    6464 cni.go:84] Creating CNI manager for ""
	I0229 19:09:20.825880    6464 cni.go:136] 3 nodes found, recommending kindnet
	I0229 19:09:20.825880    6464 start_flags.go:323] config:
	{Name:multinode-421600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-421600 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.26.62.28 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.26.56.47 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.26.50.77 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-
provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Cu
stomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 19:09:20.826499    6464 iso.go:125] acquiring lock: {Name:mk91f2ee29fbed5605669750e8cfa308a1229357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 19:09:20.827643    6464 out.go:177] * Starting control plane node multinode-421600 in cluster multinode-421600
	I0229 19:09:20.828363    6464 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 19:09:20.828621    6464 preload.go:148] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0229 19:09:20.828695    6464 cache.go:56] Caching tarball of preloaded images
	I0229 19:09:20.829070    6464 preload.go:174] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 19:09:20.829070    6464 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0229 19:09:20.829070    6464 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\config.json ...
	I0229 19:09:20.831522    6464 start.go:365] acquiring machines lock for multinode-421600: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 19:09:20.831603    6464 start.go:369] acquired machines lock for "multinode-421600" in 41.9µs
	I0229 19:09:20.831603    6464 start.go:96] Skipping create...Using existing machine configuration
	I0229 19:09:20.831603    6464 fix.go:54] fixHost starting: 
	I0229 19:09:20.832201    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 19:09:23.440536    6464 main.go:141] libmachine: [stdout =====>] : Off
	
	I0229 19:09:23.440536    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:09:23.440536    6464 fix.go:102] recreateIfNeeded on multinode-421600: state=Stopped err=<nil>
	W0229 19:09:23.440536    6464 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 19:09:23.441380    6464 out.go:177] * Restarting existing hyperv VM for "multinode-421600" ...
	I0229 19:09:23.442135    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-421600
	I0229 19:09:26.150072    6464 main.go:141] libmachine: [stdout =====>] : 
	I0229 19:09:26.150144    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:09:26.150144    6464 main.go:141] libmachine: Waiting for host to start...
	I0229 19:09:26.150144    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 19:09:28.209797    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:09:28.209999    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:09:28.209999    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 19:09:30.520399    6464 main.go:141] libmachine: [stdout =====>] : 
	I0229 19:09:30.520399    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:09:31.530275    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 19:09:33.569949    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:09:33.570808    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:09:33.570897    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 19:09:35.887488    6464 main.go:141] libmachine: [stdout =====>] : 
	I0229 19:09:35.888468    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:09:36.900220    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 19:09:38.918688    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:09:38.918737    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:09:38.918843    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 19:09:41.229134    6464 main.go:141] libmachine: [stdout =====>] : 
	I0229 19:09:41.230129    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:09:42.233973    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 19:09:44.243374    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:09:44.243374    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:09:44.243658    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 19:09:46.532342    6464 main.go:141] libmachine: [stdout =====>] : 
	I0229 19:09:46.532580    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:09:47.542432    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 19:09:49.555153    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:09:49.555320    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:09:49.555320    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 19:09:51.944513    6464 main.go:141] libmachine: [stdout =====>] : 172.26.52.109
	
	I0229 19:09:51.945038    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:09:51.947297    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 19:09:53.931831    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:09:53.931831    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:09:53.932442    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 19:09:56.327492    6464 main.go:141] libmachine: [stdout =====>] : 172.26.52.109
	
	I0229 19:09:56.327764    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:09:56.327764    6464 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\config.json ...
	I0229 19:09:56.329375    6464 machine.go:88] provisioning docker machine ...
	I0229 19:09:56.329375    6464 buildroot.go:166] provisioning hostname "multinode-421600"
	I0229 19:09:56.329961    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 19:09:58.322166    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:09:58.322166    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:09:58.323054    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 19:10:00.700014    6464 main.go:141] libmachine: [stdout =====>] : 172.26.52.109
	
	I0229 19:10:00.700014    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:10:00.704290    6464 main.go:141] libmachine: Using SSH client type: native
	I0229 19:10:00.704482    6464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.52.109 22 <nil> <nil>}
	I0229 19:10:00.704482    6464 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-421600 && echo "multinode-421600" | sudo tee /etc/hostname
	I0229 19:10:00.867164    6464 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-421600
	
	I0229 19:10:00.867164    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 19:10:02.834869    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:10:02.834869    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:10:02.834869    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 19:10:05.193844    6464 main.go:141] libmachine: [stdout =====>] : 172.26.52.109
	
	I0229 19:10:05.193989    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:10:05.197894    6464 main.go:141] libmachine: Using SSH client type: native
	I0229 19:10:05.198502    6464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.52.109 22 <nil> <nil>}
	I0229 19:10:05.198502    6464 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-421600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-421600/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-421600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 19:10:05.361662    6464 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 19:10:05.361754    6464 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0229 19:10:05.361754    6464 buildroot.go:174] setting up certificates
	I0229 19:10:05.361873    6464 provision.go:83] configureAuth start
	I0229 19:10:05.361873    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 19:10:07.329068    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:10:07.329068    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:10:07.329179    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 19:10:09.718465    6464 main.go:141] libmachine: [stdout =====>] : 172.26.52.109
	
	I0229 19:10:09.718465    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:10:09.719539    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 19:10:11.715659    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:10:11.715783    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:10:11.715783    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 19:10:14.109129    6464 main.go:141] libmachine: [stdout =====>] : 172.26.52.109
	
	I0229 19:10:14.110004    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:10:14.110080    6464 provision.go:138] copyHostCerts
	I0229 19:10:14.110080    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0229 19:10:14.110080    6464 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0229 19:10:14.110080    6464 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0229 19:10:14.110771    6464 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0229 19:10:14.111702    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0229 19:10:14.111888    6464 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0229 19:10:14.111888    6464 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0229 19:10:14.111888    6464 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0229 19:10:14.112477    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0229 19:10:14.113172    6464 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0229 19:10:14.113172    6464 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0229 19:10:14.113172    6464 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0229 19:10:14.113833    6464 provision.go:112] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-421600 san=[172.26.52.109 172.26.52.109 localhost 127.0.0.1 minikube multinode-421600]
	I0229 19:10:14.218549    6464 provision.go:172] copyRemoteCerts
	I0229 19:10:14.228967    6464 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 19:10:14.228967    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 19:10:16.196044    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:10:16.196044    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:10:16.196105    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 19:10:18.583511    6464 main.go:141] libmachine: [stdout =====>] : 172.26.52.109
	
	I0229 19:10:18.583511    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:10:18.584258    6464 sshutil.go:53] new ssh client: &{IP:172.26.52.109 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600\id_rsa Username:docker}
	I0229 19:10:18.694301    6464 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4650864s)
	I0229 19:10:18.694417    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0229 19:10:18.694734    6464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 19:10:18.745620    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0229 19:10:18.745970    6464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I0229 19:10:18.796413    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0229 19:10:18.796736    6464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 19:10:18.843744    6464 provision.go:86] duration metric: configureAuth took 13.4811222s
	I0229 19:10:18.843744    6464 buildroot.go:189] setting minikube options for container-runtime
	I0229 19:10:18.844462    6464 config.go:182] Loaded profile config "multinode-421600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 19:10:18.844565    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 19:10:20.846381    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:10:20.846381    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:10:20.846876    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 19:10:23.241283    6464 main.go:141] libmachine: [stdout =====>] : 172.26.52.109
	
	I0229 19:10:23.241356    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:10:23.245400    6464 main.go:141] libmachine: Using SSH client type: native
	I0229 19:10:23.245924    6464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.52.109 22 <nil> <nil>}
	I0229 19:10:23.245924    6464 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 19:10:23.387004    6464 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 19:10:23.387063    6464 buildroot.go:70] root file system type: tmpfs
	I0229 19:10:23.387329    6464 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 19:10:23.387412    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 19:10:25.358142    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:10:25.358142    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:10:25.359185    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 19:10:27.723120    6464 main.go:141] libmachine: [stdout =====>] : 172.26.52.109
	
	I0229 19:10:27.723209    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:10:27.729072    6464 main.go:141] libmachine: Using SSH client type: native
	I0229 19:10:27.729758    6464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.52.109 22 <nil> <nil>}
	I0229 19:10:27.729758    6464 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 19:10:27.888564    6464 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 19:10:27.888762    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 19:10:29.834013    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:10:29.834013    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:10:29.834671    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 19:10:32.206432    6464 main.go:141] libmachine: [stdout =====>] : 172.26.52.109
	
	I0229 19:10:32.206432    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:10:32.210137    6464 main.go:141] libmachine: Using SSH client type: native
	I0229 19:10:32.210731    6464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.52.109 22 <nil> <nil>}
	I0229 19:10:32.210731    6464 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 19:10:33.564797    6464 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0229 19:10:33.564797    6464 machine.go:91] provisioned docker machine in 37.2333554s
	I0229 19:10:33.564797    6464 start.go:300] post-start starting for "multinode-421600" (driver="hyperv")
	I0229 19:10:33.564797    6464 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 19:10:33.574354    6464 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 19:10:33.574354    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 19:10:35.590935    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:10:35.591982    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:10:35.592049    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 19:10:37.996605    6464 main.go:141] libmachine: [stdout =====>] : 172.26.52.109
	
	I0229 19:10:37.996605    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:10:37.996951    6464 sshutil.go:53] new ssh client: &{IP:172.26.52.109 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600\id_rsa Username:docker}
	I0229 19:10:38.113451    6464 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5388451s)
	I0229 19:10:38.122560    6464 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 19:10:38.130083    6464 command_runner.go:130] > NAME=Buildroot
	I0229 19:10:38.130158    6464 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0229 19:10:38.130158    6464 command_runner.go:130] > ID=buildroot
	I0229 19:10:38.130158    6464 command_runner.go:130] > VERSION_ID=2023.02.9
	I0229 19:10:38.130158    6464 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0229 19:10:38.130238    6464 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 19:10:38.130283    6464 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0229 19:10:38.130608    6464 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0229 19:10:38.130812    6464 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem -> 43562.pem in /etc/ssl/certs
	I0229 19:10:38.130812    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem -> /etc/ssl/certs/43562.pem
	I0229 19:10:38.140300    6464 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 19:10:38.158834    6464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem --> /etc/ssl/certs/43562.pem (1708 bytes)
	I0229 19:10:38.205014    6464 start.go:303] post-start completed in 4.6399592s
	I0229 19:10:38.205087    6464 fix.go:56] fixHost completed within 1m17.3691893s
	I0229 19:10:38.205139    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 19:10:40.215542    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:10:40.215542    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:10:40.216045    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 19:10:42.604325    6464 main.go:141] libmachine: [stdout =====>] : 172.26.52.109
	
	I0229 19:10:42.604325    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:10:42.609028    6464 main.go:141] libmachine: Using SSH client type: native
	I0229 19:10:42.609631    6464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.52.109 22 <nil> <nil>}
	I0229 19:10:42.609744    6464 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 19:10:42.748541    6464 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709233842.910900906
	
	I0229 19:10:42.748541    6464 fix.go:206] guest clock: 1709233842.910900906
	I0229 19:10:42.748541    6464 fix.go:219] Guest: 2024-02-29 19:10:42.910900906 +0000 UTC Remote: 2024-02-29 19:10:38.2050873 +0000 UTC m=+82.565665801 (delta=4.705813606s)
	I0229 19:10:42.748662    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 19:10:44.731895    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:10:44.731954    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:10:44.731954    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 19:10:47.095023    6464 main.go:141] libmachine: [stdout =====>] : 172.26.52.109
	
	I0229 19:10:47.095774    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:10:47.101001    6464 main.go:141] libmachine: Using SSH client type: native
	I0229 19:10:47.101604    6464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.52.109 22 <nil> <nil>}
	I0229 19:10:47.101604    6464 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709233842
	I0229 19:10:47.254065    6464 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Feb 29 19:10:42 UTC 2024
	
	I0229 19:10:47.254065    6464 fix.go:226] clock set: Thu Feb 29 19:10:42 UTC 2024
	 (err=<nil>)
	I0229 19:10:47.254065    6464 start.go:83] releasing machines lock for "multinode-421600", held for 1m26.4176648s
	I0229 19:10:47.254065    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 19:10:49.281155    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:10:49.281565    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:10:49.281565    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 19:10:51.671912    6464 main.go:141] libmachine: [stdout =====>] : 172.26.52.109
	
	I0229 19:10:51.671912    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:10:51.676248    6464 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 19:10:51.676382    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 19:10:51.682769    6464 ssh_runner.go:195] Run: cat /version.json
	I0229 19:10:51.682769    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 19:10:53.657268    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:10:53.657268    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:10:53.657268    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 19:10:53.657268    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:10:53.657268    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:10:53.657870    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 19:10:56.092273    6464 main.go:141] libmachine: [stdout =====>] : 172.26.52.109
	
	I0229 19:10:56.092573    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:10:56.092628    6464 sshutil.go:53] new ssh client: &{IP:172.26.52.109 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600\id_rsa Username:docker}
	I0229 19:10:56.119380    6464 main.go:141] libmachine: [stdout =====>] : 172.26.52.109
	
	I0229 19:10:56.119380    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:10:56.119975    6464 sshutil.go:53] new ssh client: &{IP:172.26.52.109 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600\id_rsa Username:docker}
	I0229 19:10:56.250891    6464 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0229 19:10:56.250891    6464 command_runner.go:130] > {"iso_version": "v1.32.1-1708638130-18020", "kicbase_version": "v0.0.42-1708008208-17936", "minikube_version": "v1.32.0", "commit": "d80143d2abd5a004b09b48bbc118a104326900af"}
	I0229 19:10:56.250891    6464 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5743234s)
	I0229 19:10:56.250891    6464 ssh_runner.go:235] Completed: cat /version.json: (4.5678679s)
	I0229 19:10:56.260915    6464 ssh_runner.go:195] Run: systemctl --version
	I0229 19:10:56.270340    6464 command_runner.go:130] > systemd 252 (252)
	I0229 19:10:56.270340    6464 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0229 19:10:56.279892    6464 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0229 19:10:56.287861    6464 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0229 19:10:56.289077    6464 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 19:10:56.298879    6464 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 19:10:56.327960    6464 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0229 19:10:56.328325    6464 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 19:10:56.328535    6464 start.go:475] detecting cgroup driver to use...
	I0229 19:10:56.328573    6464 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 19:10:56.361899    6464 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0229 19:10:56.371351    6464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 19:10:56.402412    6464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 19:10:56.421485    6464 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 19:10:56.430495    6464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 19:10:56.459654    6464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 19:10:56.488649    6464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 19:10:56.521121    6464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 19:10:56.550112    6464 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 19:10:56.578680    6464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 19:10:56.606749    6464 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 19:10:56.624969    6464 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0229 19:10:56.635527    6464 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 19:10:56.661529    6464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 19:10:56.858944    6464 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 19:10:56.887996    6464 start.go:475] detecting cgroup driver to use...
	I0229 19:10:56.897720    6464 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 19:10:56.918697    6464 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0229 19:10:56.918697    6464 command_runner.go:130] > [Unit]
	I0229 19:10:56.918697    6464 command_runner.go:130] > Description=Docker Application Container Engine
	I0229 19:10:56.918697    6464 command_runner.go:130] > Documentation=https://docs.docker.com
	I0229 19:10:56.918697    6464 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0229 19:10:56.918697    6464 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0229 19:10:56.918697    6464 command_runner.go:130] > StartLimitBurst=3
	I0229 19:10:56.918697    6464 command_runner.go:130] > StartLimitIntervalSec=60
	I0229 19:10:56.918697    6464 command_runner.go:130] > [Service]
	I0229 19:10:56.918697    6464 command_runner.go:130] > Type=notify
	I0229 19:10:56.918697    6464 command_runner.go:130] > Restart=on-failure
	I0229 19:10:56.918697    6464 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0229 19:10:56.918697    6464 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0229 19:10:56.918697    6464 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0229 19:10:56.918697    6464 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0229 19:10:56.918697    6464 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0229 19:10:56.918697    6464 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0229 19:10:56.918697    6464 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0229 19:10:56.918697    6464 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0229 19:10:56.918697    6464 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0229 19:10:56.918697    6464 command_runner.go:130] > ExecStart=
	I0229 19:10:56.918697    6464 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0229 19:10:56.918697    6464 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0229 19:10:56.918697    6464 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0229 19:10:56.918697    6464 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0229 19:10:56.918697    6464 command_runner.go:130] > LimitNOFILE=infinity
	I0229 19:10:56.918697    6464 command_runner.go:130] > LimitNPROC=infinity
	I0229 19:10:56.918697    6464 command_runner.go:130] > LimitCORE=infinity
	I0229 19:10:56.918697    6464 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0229 19:10:56.918697    6464 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0229 19:10:56.918697    6464 command_runner.go:130] > TasksMax=infinity
	I0229 19:10:56.918697    6464 command_runner.go:130] > TimeoutStartSec=0
	I0229 19:10:56.918697    6464 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0229 19:10:56.918697    6464 command_runner.go:130] > Delegate=yes
	I0229 19:10:56.918697    6464 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0229 19:10:56.918697    6464 command_runner.go:130] > KillMode=process
	I0229 19:10:56.918697    6464 command_runner.go:130] > [Install]
	I0229 19:10:56.918697    6464 command_runner.go:130] > WantedBy=multi-user.target
	I0229 19:10:56.926705    6464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 19:10:56.957005    6464 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 19:10:56.990820    6464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 19:10:57.023818    6464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 19:10:57.054820    6464 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 19:10:57.101846    6464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 19:10:57.124920    6464 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 19:10:57.158371    6464 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0229 19:10:57.166965    6464 ssh_runner.go:195] Run: which cri-dockerd
	I0229 19:10:57.171968    6464 command_runner.go:130] > /usr/bin/cri-dockerd
	I0229 19:10:57.182184    6464 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 19:10:57.199520    6464 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 19:10:57.238356    6464 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 19:10:57.424241    6464 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 19:10:57.617126    6464 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 19:10:57.617406    6464 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 19:10:57.658886    6464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 19:10:57.844776    6464 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 19:10:59.469182    6464 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.6233699s)
	I0229 19:10:59.481010    6464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0229 19:10:59.515977    6464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 19:10:59.550828    6464 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0229 19:10:59.742461    6464 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0229 19:10:59.939549    6464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 19:11:00.127199    6464 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0229 19:11:00.165918    6464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 19:11:00.198110    6464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 19:11:00.389627    6464 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0229 19:11:00.484305    6464 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0229 19:11:00.494037    6464 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0229 19:11:00.502240    6464 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0229 19:11:00.502273    6464 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0229 19:11:00.502273    6464 command_runner.go:130] > Device: 0,22	Inode: 852         Links: 1
	I0229 19:11:00.502333    6464 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0229 19:11:00.502333    6464 command_runner.go:130] > Access: 2024-02-29 19:11:00.584294421 +0000
	I0229 19:11:00.502365    6464 command_runner.go:130] > Modify: 2024-02-29 19:11:00.584294421 +0000
	I0229 19:11:00.502365    6464 command_runner.go:130] > Change: 2024-02-29 19:11:00.588295213 +0000
	I0229 19:11:00.502365    6464 command_runner.go:130] >  Birth: -
	I0229 19:11:00.502405    6464 start.go:543] Will wait 60s for crictl version
	I0229 19:11:00.511807    6464 ssh_runner.go:195] Run: which crictl
	I0229 19:11:00.517659    6464 command_runner.go:130] > /usr/bin/crictl
	I0229 19:11:00.526677    6464 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 19:11:00.595363    6464 command_runner.go:130] > Version:  0.1.0
	I0229 19:11:00.595640    6464 command_runner.go:130] > RuntimeName:  docker
	I0229 19:11:00.595640    6464 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0229 19:11:00.595640    6464 command_runner.go:130] > RuntimeApiVersion:  v1
	I0229 19:11:00.596076    6464 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0229 19:11:00.603122    6464 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 19:11:00.633812    6464 command_runner.go:130] > 24.0.7
	I0229 19:11:00.644989    6464 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 19:11:00.675526    6464 command_runner.go:130] > 24.0.7
	I0229 19:11:00.677551    6464 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0229 19:11:00.677710    6464 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0229 19:11:00.681969    6464 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0229 19:11:00.682151    6464 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0229 19:11:00.682186    6464 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0229 19:11:00.682186    6464 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:76:3f:19 Flags:up|broadcast|multicast|running}
	I0229 19:11:00.684927    6464 ip.go:210] interface addr: fe80::841a:4367:8c9:abc/64
	I0229 19:11:00.684927    6464 ip.go:210] interface addr: 172.26.48.1/20
	I0229 19:11:00.693920    6464 ssh_runner.go:195] Run: grep 172.26.48.1	host.minikube.internal$ /etc/hosts
	I0229 19:11:00.700142    6464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.26.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 19:11:00.720415    6464 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 19:11:00.727725    6464 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 19:11:00.750837    6464 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0229 19:11:00.751332    6464 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0229 19:11:00.751332    6464 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0229 19:11:00.751375    6464 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0229 19:11:00.751375    6464 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0229 19:11:00.751422    6464 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0229 19:11:00.751422    6464 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0229 19:11:00.751456    6464 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0229 19:11:00.751456    6464 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 19:11:00.751456    6464 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0229 19:11:00.752342    6464 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0229 19:11:00.752342    6464 docker.go:615] Images already preloaded, skipping extraction
	I0229 19:11:00.758329    6464 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 19:11:00.782482    6464 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0229 19:11:00.782482    6464 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0229 19:11:00.782482    6464 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0229 19:11:00.782482    6464 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0229 19:11:00.782482    6464 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0229 19:11:00.782482    6464 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0229 19:11:00.782482    6464 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0229 19:11:00.782482    6464 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0229 19:11:00.782482    6464 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 19:11:00.782482    6464 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0229 19:11:00.782482    6464 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0229 19:11:00.782482    6464 cache_images.go:84] Images are preloaded, skipping loading
	I0229 19:11:00.793563    6464 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 19:11:00.832750    6464 command_runner.go:130] > cgroupfs
	I0229 19:11:00.833626    6464 cni.go:84] Creating CNI manager for ""
	I0229 19:11:00.833650    6464 cni.go:136] 3 nodes found, recommending kindnet
	I0229 19:11:00.833650    6464 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 19:11:00.833650    6464 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.26.52.109 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-421600 NodeName:multinode-421600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.26.52.109"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.26.52.109 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 19:11:00.833650    6464 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.26.52.109
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-421600"
	  kubeletExtraArgs:
	    node-ip: 172.26.52.109
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.26.52.109"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 19:11:00.834176    6464 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-421600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.26.52.109
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-421600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 19:11:00.843000    6464 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 19:11:00.861240    6464 command_runner.go:130] > kubeadm
	I0229 19:11:00.861296    6464 command_runner.go:130] > kubectl
	I0229 19:11:00.861296    6464 command_runner.go:130] > kubelet
	I0229 19:11:00.861347    6464 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 19:11:00.870130    6464 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 19:11:00.886134    6464 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0229 19:11:00.917717    6464 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 19:11:00.952857    6464 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0229 19:11:00.994974    6464 ssh_runner.go:195] Run: grep 172.26.52.109	control-plane.minikube.internal$ /etc/hosts
	I0229 19:11:01.000977    6464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.26.52.109	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 19:11:01.021218    6464 certs.go:56] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600 for IP: 172.26.52.109
	I0229 19:11:01.021218    6464 certs.go:190] acquiring lock for shared ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:11:01.021845    6464 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0229 19:11:01.022115    6464 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0229 19:11:01.023052    6464 certs.go:315] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\client.key
	I0229 19:11:01.023219    6464 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\apiserver.key.10087114
	I0229 19:11:01.023482    6464 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\apiserver.crt.10087114 with IP's: [172.26.52.109 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 19:11:01.534092    6464 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\apiserver.crt.10087114 ...
	I0229 19:11:01.534092    6464 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\apiserver.crt.10087114: {Name:mkaebb1fe3de1edf28208e4c19a8794e3789d7dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:11:01.535101    6464 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\apiserver.key.10087114 ...
	I0229 19:11:01.535101    6464 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\apiserver.key.10087114: {Name:mk5fcc8d2f8791cbad0e22fee02a0f4dd26639ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:11:01.536177    6464 certs.go:337] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\apiserver.crt.10087114 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\apiserver.crt
	I0229 19:11:01.549691    6464 certs.go:341] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\apiserver.key.10087114 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\apiserver.key
	I0229 19:11:01.550481    6464 certs.go:315] skipping aggregator signed cert generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\proxy-client.key
	I0229 19:11:01.550481    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0229 19:11:01.550810    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0229 19:11:01.551482    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0229 19:11:01.551615    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0229 19:11:01.551615    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0229 19:11:01.551615    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0229 19:11:01.551615    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0229 19:11:01.551615    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0229 19:11:01.552224    6464 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4356.pem (1338 bytes)
	W0229 19:11:01.552224    6464 certs.go:433] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4356_empty.pem, impossibly tiny 0 bytes
	I0229 19:11:01.552224    6464 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0229 19:11:01.552224    6464 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0229 19:11:01.552821    6464 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0229 19:11:01.552821    6464 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0229 19:11:01.552821    6464 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem (1708 bytes)
	I0229 19:11:01.553420    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4356.pem -> /usr/share/ca-certificates/4356.pem
	I0229 19:11:01.553420    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem -> /usr/share/ca-certificates/43562.pem
	I0229 19:11:01.553420    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0229 19:11:01.554583    6464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 19:11:01.600437    6464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0229 19:11:01.651938    6464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 19:11:01.698474    6464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 19:11:01.743937    6464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 19:11:01.791621    6464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 19:11:01.837644    6464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 19:11:01.883215    6464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 19:11:01.929136    6464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4356.pem --> /usr/share/ca-certificates/4356.pem (1338 bytes)
	I0229 19:11:01.973549    6464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem --> /usr/share/ca-certificates/43562.pem (1708 bytes)
	I0229 19:11:02.019800    6464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 19:11:02.063041    6464 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 19:11:02.108466    6464 ssh_runner.go:195] Run: openssl version
	I0229 19:11:02.117255    6464 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0229 19:11:02.126736    6464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43562.pem && ln -fs /usr/share/ca-certificates/43562.pem /etc/ssl/certs/43562.pem"
	I0229 19:11:02.156573    6464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43562.pem
	I0229 19:11:02.164133    6464 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 29 17:55 /usr/share/ca-certificates/43562.pem
	I0229 19:11:02.164310    6464 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:55 /usr/share/ca-certificates/43562.pem
	I0229 19:11:02.177125    6464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43562.pem
	I0229 19:11:02.185905    6464 command_runner.go:130] > 3ec20f2e
	I0229 19:11:02.195960    6464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43562.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 19:11:02.223615    6464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 19:11:02.251034    6464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 19:11:02.259381    6464 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 29 17:41 /usr/share/ca-certificates/minikubeCA.pem
	I0229 19:11:02.259786    6464 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:41 /usr/share/ca-certificates/minikubeCA.pem
	I0229 19:11:02.269862    6464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 19:11:02.282270    6464 command_runner.go:130] > b5213941
	I0229 19:11:02.294004    6464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 19:11:02.324784    6464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4356.pem && ln -fs /usr/share/ca-certificates/4356.pem /etc/ssl/certs/4356.pem"
	I0229 19:11:02.352400    6464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4356.pem
	I0229 19:11:02.359812    6464 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 29 17:55 /usr/share/ca-certificates/4356.pem
	I0229 19:11:02.359812    6464 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:55 /usr/share/ca-certificates/4356.pem
	I0229 19:11:02.371950    6464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4356.pem
	I0229 19:11:02.381035    6464 command_runner.go:130] > 51391683
	I0229 19:11:02.390017    6464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4356.pem /etc/ssl/certs/51391683.0"
	I0229 19:11:02.418533    6464 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 19:11:02.425505    6464 command_runner.go:130] > ca.crt
	I0229 19:11:02.425505    6464 command_runner.go:130] > ca.key
	I0229 19:11:02.425505    6464 command_runner.go:130] > healthcheck-client.crt
	I0229 19:11:02.425505    6464 command_runner.go:130] > healthcheck-client.key
	I0229 19:11:02.425505    6464 command_runner.go:130] > peer.crt
	I0229 19:11:02.425505    6464 command_runner.go:130] > peer.key
	I0229 19:11:02.425505    6464 command_runner.go:130] > server.crt
	I0229 19:11:02.425505    6464 command_runner.go:130] > server.key
	I0229 19:11:02.433965    6464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0229 19:11:02.442973    6464 command_runner.go:130] > Certificate will not expire
	I0229 19:11:02.451476    6464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0229 19:11:02.460595    6464 command_runner.go:130] > Certificate will not expire
	I0229 19:11:02.469906    6464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0229 19:11:02.478775    6464 command_runner.go:130] > Certificate will not expire
	I0229 19:11:02.487861    6464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0229 19:11:02.496822    6464 command_runner.go:130] > Certificate will not expire
	I0229 19:11:02.506998    6464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0229 19:11:02.517013    6464 command_runner.go:130] > Certificate will not expire
	I0229 19:11:02.526533    6464 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0229 19:11:02.536306    6464 command_runner.go:130] > Certificate will not expire
	I0229 19:11:02.536638    6464 kubeadm.go:404] StartCluster: {Name:multinode-421600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.28.4 ClusterName:multinode-421600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.26.52.109 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.26.56.47 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.26.50.77 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingres
s:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:dock
er BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 19:11:02.543371    6464 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 19:11:02.578314    6464 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 19:11:02.597009    6464 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0229 19:11:02.597009    6464 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0229 19:11:02.597009    6464 command_runner.go:130] > /var/lib/minikube/etcd:
	I0229 19:11:02.597009    6464 command_runner.go:130] > member
	I0229 19:11:02.598013    6464 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0229 19:11:02.598013    6464 kubeadm.go:636] restartCluster start
	I0229 19:11:02.608010    6464 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0229 19:11:02.625893    6464 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0229 19:11:02.626420    6464 kubeconfig.go:135] verify returned: extract IP: "multinode-421600" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 19:11:02.627021    6464 kubeconfig.go:146] "multinode-421600" context is missing from C:\Users\jenkins.minikube5\minikube-integration\kubeconfig - will repair!
	I0229 19:11:02.627021    6464 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:11:02.641593    6464 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 19:11:02.642202    6464 kapi.go:59] client config for multinode-421600: &rest.Config{Host:"https://172.26.52.109:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-421600/client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-421600/client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData
:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ff0600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 19:11:02.642795    6464 cert_rotation.go:137] Starting client certificate rotation controller
	I0229 19:11:02.651852    6464 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0229 19:11:02.669440    6464 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0229 19:11:02.669440    6464 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0229 19:11:02.669440    6464 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0229 19:11:02.669440    6464 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0229 19:11:02.669440    6464 command_runner.go:130] >  kind: InitConfiguration
	I0229 19:11:02.669440    6464 command_runner.go:130] >  localAPIEndpoint:
	I0229 19:11:02.669440    6464 command_runner.go:130] > -  advertiseAddress: 172.26.62.28
	I0229 19:11:02.669440    6464 command_runner.go:130] > +  advertiseAddress: 172.26.52.109
	I0229 19:11:02.669440    6464 command_runner.go:130] >    bindPort: 8443
	I0229 19:11:02.669440    6464 command_runner.go:130] >  bootstrapTokens:
	I0229 19:11:02.669440    6464 command_runner.go:130] >    - groups:
	I0229 19:11:02.669440    6464 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0229 19:11:02.669440    6464 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0229 19:11:02.669440    6464 command_runner.go:130] >    name: "multinode-421600"
	I0229 19:11:02.669440    6464 command_runner.go:130] >    kubeletExtraArgs:
	I0229 19:11:02.669440    6464 command_runner.go:130] > -    node-ip: 172.26.62.28
	I0229 19:11:02.669440    6464 command_runner.go:130] > +    node-ip: 172.26.52.109
	I0229 19:11:02.669440    6464 command_runner.go:130] >    taints: []
	I0229 19:11:02.669440    6464 command_runner.go:130] >  ---
	I0229 19:11:02.669440    6464 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0229 19:11:02.669440    6464 command_runner.go:130] >  kind: ClusterConfiguration
	I0229 19:11:02.669440    6464 command_runner.go:130] >  apiServer:
	I0229 19:11:02.669440    6464 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.26.62.28"]
	I0229 19:11:02.669440    6464 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.26.52.109"]
	I0229 19:11:02.669440    6464 command_runner.go:130] >    extraArgs:
	I0229 19:11:02.669440    6464 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0229 19:11:02.669440    6464 command_runner.go:130] >  controllerManager:
	I0229 19:11:02.670421    6464 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.26.62.28
	+  advertiseAddress: 172.26.52.109
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-421600"
	   kubeletExtraArgs:
	-    node-ip: 172.26.62.28
	+    node-ip: 172.26.52.109
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.26.62.28"]
	+  certSANs: ["127.0.0.1", "localhost", "172.26.52.109"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
	I0229 19:11:02.670421    6464 kubeadm.go:1135] stopping kube-system containers ...
	I0229 19:11:02.677416    6464 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 19:11:02.704423    6464 command_runner.go:130] > 7be33bccda15
	I0229 19:11:02.704423    6464 command_runner.go:130] > f4d0b06ecf4a
	I0229 19:11:02.704423    6464 command_runner.go:130] > 8f42b1a35229
	I0229 19:11:02.704423    6464 command_runner.go:130] > f53a12cbddd5
	I0229 19:11:02.704423    6464 command_runner.go:130] > 92f6a9511f4f
	I0229 19:11:02.704423    6464 command_runner.go:130] > 2f8a25ce65da
	I0229 19:11:02.704423    6464 command_runner.go:130] > 39324e665418
	I0229 19:11:02.704423    6464 command_runner.go:130] > 779c3df146b2
	I0229 19:11:02.704423    6464 command_runner.go:130] > 9245396d3b64
	I0229 19:11:02.704423    6464 command_runner.go:130] > ea0adcda4ba9
	I0229 19:11:02.704423    6464 command_runner.go:130] > 52fe82a87fa8
	I0229 19:11:02.704955    6464 command_runner.go:130] > b8c8786727c5
	I0229 19:11:02.704955    6464 command_runner.go:130] > 1ae101209a8f
	I0229 19:11:02.704955    6464 command_runner.go:130] > 2a191aae0ba2
	I0229 19:11:02.704955    6464 command_runner.go:130] > d9fcf1cc8d35
	I0229 19:11:02.704955    6464 command_runner.go:130] > 7f9c423f4482
	I0229 19:11:02.705834    6464 docker.go:483] Stopping containers: [7be33bccda15 f4d0b06ecf4a 8f42b1a35229 f53a12cbddd5 92f6a9511f4f 2f8a25ce65da 39324e665418 779c3df146b2 9245396d3b64 ea0adcda4ba9 52fe82a87fa8 b8c8786727c5 1ae101209a8f 2a191aae0ba2 d9fcf1cc8d35 7f9c423f4482]
	I0229 19:11:02.713169    6464 ssh_runner.go:195] Run: docker stop 7be33bccda15 f4d0b06ecf4a 8f42b1a35229 f53a12cbddd5 92f6a9511f4f 2f8a25ce65da 39324e665418 779c3df146b2 9245396d3b64 ea0adcda4ba9 52fe82a87fa8 b8c8786727c5 1ae101209a8f 2a191aae0ba2 d9fcf1cc8d35 7f9c423f4482
	I0229 19:11:02.736641    6464 command_runner.go:130] > 7be33bccda15
	I0229 19:11:02.736641    6464 command_runner.go:130] > f4d0b06ecf4a
	I0229 19:11:02.736641    6464 command_runner.go:130] > 8f42b1a35229
	I0229 19:11:02.736641    6464 command_runner.go:130] > f53a12cbddd5
	I0229 19:11:02.736641    6464 command_runner.go:130] > 92f6a9511f4f
	I0229 19:11:02.736641    6464 command_runner.go:130] > 2f8a25ce65da
	I0229 19:11:02.737505    6464 command_runner.go:130] > 39324e665418
	I0229 19:11:02.737588    6464 command_runner.go:130] > 779c3df146b2
	I0229 19:11:02.737588    6464 command_runner.go:130] > 9245396d3b64
	I0229 19:11:02.737588    6464 command_runner.go:130] > ea0adcda4ba9
	I0229 19:11:02.737588    6464 command_runner.go:130] > 52fe82a87fa8
	I0229 19:11:02.737658    6464 command_runner.go:130] > b8c8786727c5
	I0229 19:11:02.737677    6464 command_runner.go:130] > 1ae101209a8f
	I0229 19:11:02.737677    6464 command_runner.go:130] > 2a191aae0ba2
	I0229 19:11:02.737677    6464 command_runner.go:130] > d9fcf1cc8d35
	I0229 19:11:02.737677    6464 command_runner.go:130] > 7f9c423f4482
	I0229 19:11:02.750510    6464 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0229 19:11:02.788548    6464 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 19:11:02.806693    6464 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0229 19:11:02.806693    6464 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0229 19:11:02.806693    6464 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0229 19:11:02.806693    6464 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 19:11:02.807064    6464 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 19:11:02.816466    6464 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 19:11:02.833778    6464 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0229 19:11:02.833882    6464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 19:11:03.154781    6464 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 19:11:03.154781    6464 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0229 19:11:03.154781    6464 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0229 19:11:03.154781    6464 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 19:11:03.154781    6464 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0229 19:11:03.154781    6464 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0229 19:11:03.154781    6464 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0229 19:11:03.154781    6464 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0229 19:11:03.154781    6464 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0229 19:11:03.154781    6464 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 19:11:03.154781    6464 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 19:11:03.154781    6464 command_runner.go:130] > [certs] Using the existing "sa" key
	I0229 19:11:03.154781    6464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 19:11:03.854288    6464 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 19:11:03.854288    6464 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 19:11:03.854410    6464 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 19:11:03.854410    6464 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 19:11:03.854410    6464 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 19:11:03.854499    6464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0229 19:11:04.141768    6464 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 19:11:04.141908    6464 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 19:11:04.141908    6464 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0229 19:11:04.141908    6464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 19:11:04.240428    6464 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 19:11:04.241456    6464 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 19:11:04.241456    6464 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 19:11:04.241456    6464 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 19:11:04.241456    6464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0229 19:11:04.331413    6464 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 19:11:04.331963    6464 api_server.go:52] waiting for apiserver process to appear ...
	I0229 19:11:04.341196    6464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:11:04.854849    6464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:11:05.340683    6464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:11:05.846934    6464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:11:06.357447    6464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:11:06.394180    6464 command_runner.go:130] > 1892
	I0229 19:11:06.394278    6464 api_server.go:72] duration metric: took 2.0622003s to wait for apiserver process to appear ...
	I0229 19:11:06.394278    6464 api_server.go:88] waiting for apiserver healthz status ...
	I0229 19:11:06.394278    6464 api_server.go:253] Checking apiserver healthz at https://172.26.52.109:8443/healthz ...
	I0229 19:11:10.316414    6464 api_server.go:279] https://172.26.52.109:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 19:11:10.317063    6464 api_server.go:103] status: https://172.26.52.109:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 19:11:10.317063    6464 api_server.go:253] Checking apiserver healthz at https://172.26.52.109:8443/healthz ...
	I0229 19:11:10.377671    6464 api_server.go:279] https://172.26.52.109:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 19:11:10.377671    6464 api_server.go:103] status: https://172.26.52.109:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 19:11:10.402293    6464 api_server.go:253] Checking apiserver healthz at https://172.26.52.109:8443/healthz ...
	I0229 19:11:10.425223    6464 api_server.go:279] https://172.26.52.109:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 19:11:10.425223    6464 api_server.go:103] status: https://172.26.52.109:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
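Each 500 body above is kube-apiserver's verbose `/healthz` output: one `[+]` (passing) or `[-]` (failing) line per named check, ending in `healthz check failed`. A small illustrative parser (not part of minikube; the function name and return shape are my own) that turns such a dump into a pass/fail map:

```python
def parse_healthz(body: str) -> dict[str, bool]:
    """Map each healthz check name to True ([+] passing) or False ([-] failing)."""
    results: dict[str, bool] = {}
    for line in body.splitlines():
        line = line.strip()
        if line.startswith("[+]"):
            # "[+]ping ok" -> name is the token after the marker
            results[line[3:].split(" ")[0]] = True
        elif line.startswith("[-]"):
            # "[-]poststarthook/... failed: reason withheld"
            results[line[3:].split(" ")[0]] = False
    return results

sample = """[+]ping ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
healthz check failed"""
checks = parse_healthz(sample)
failing = [name for name, ok in checks.items() if not ok]
```

With a real dump like the ones above, `failing` pinpoints which poststarthooks are still holding the endpoint at 500.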
	I0229 19:11:10.908335    6464 api_server.go:253] Checking apiserver healthz at https://172.26.52.109:8443/healthz ...
	I0229 19:11:10.920606    6464 api_server.go:279] https://172.26.52.109:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 19:11:10.921018    6464 api_server.go:103] status: https://172.26.52.109:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 19:11:11.403241    6464 api_server.go:253] Checking apiserver healthz at https://172.26.52.109:8443/healthz ...
	I0229 19:11:11.414740    6464 api_server.go:279] https://172.26.52.109:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0229 19:11:11.414831    6464 api_server.go:103] status: https://172.26.52.109:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0229 19:11:11.896307    6464 api_server.go:253] Checking apiserver healthz at https://172.26.52.109:8443/healthz ...
	I0229 19:11:11.907755    6464 api_server.go:279] https://172.26.52.109:8443/healthz returned 200:
	ok
	I0229 19:11:11.908098    6464 round_trippers.go:463] GET https://172.26.52.109:8443/version
	I0229 19:11:11.908132    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:11.908162    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:11.908162    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:11.922393    6464 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0229 19:11:11.922393    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:11.922393    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:11.922393    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:11.922393    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:11.922393    6464 round_trippers.go:580]     Content-Length: 264
	I0229 19:11:11.922393    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:12 GMT
	I0229 19:11:11.922393    6464 round_trippers.go:580]     Audit-Id: ad11915f-00c9-442c-904b-63a79e4ad2b4
	I0229 19:11:11.922393    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:11.922393    6464 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0229 19:11:11.922393    6464 api_server.go:141] control plane version: v1.28.4
	I0229 19:11:11.922393    6464 api_server.go:131] duration metric: took 5.5278084s to wait for apiserver health ...
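The 5.5s "wait for apiserver health" above is a simple poll loop: the endpoint is re-queried roughly twice a second until it returns 200 or a deadline expires. A minimal sketch of that pattern (function name, timings, and return value are illustrative assumptions, not minikube's actual API):

```python
import time


def wait_for_healthz(probe, timeout_s=300.0, interval_s=0.5,
                     clock=time.monotonic, sleep=time.sleep):
    """Poll `probe()` (returns an HTTP status code) until it reports 200.

    Returns the number of attempts made; raises TimeoutError if the
    deadline passes first. Injectable clock/sleep keep the sketch testable.
    """
    deadline = clock() + timeout_s
    attempts = 0
    while clock() < deadline:
        attempts += 1
        if probe() == 200:
            return attempts
        sleep(interval_s)
    raise TimeoutError("apiserver /healthz never returned 200")


# Simulated probe: three 500s, then 200, mirroring the log sequence above.
codes = iter([500, 500, 500, 200])
attempts = wait_for_healthz(lambda: next(codes), sleep=lambda _: None)
```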
	I0229 19:11:11.922393    6464 cni.go:84] Creating CNI manager for ""
	I0229 19:11:11.922393    6464 cni.go:136] 3 nodes found, recommending kindnet
	I0229 19:11:11.924070    6464 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0229 19:11:11.935973    6464 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0229 19:11:11.944388    6464 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0229 19:11:11.944388    6464 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0229 19:11:11.944388    6464 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0229 19:11:11.944486    6464 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0229 19:11:11.944486    6464 command_runner.go:130] > Access: 2024-02-29 19:09:50.700291300 +0000
	I0229 19:11:11.944486    6464 command_runner.go:130] > Modify: 2024-02-23 03:39:37.000000000 +0000
	I0229 19:11:11.944486    6464 command_runner.go:130] > Change: 2024-02-29 19:09:39.251000000 +0000
	I0229 19:11:11.944486    6464 command_runner.go:130] >  Birth: -
	I0229 19:11:11.944637    6464 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0229 19:11:11.944637    6464 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0229 19:11:12.004085    6464 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0229 19:11:13.557072    6464 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0229 19:11:13.557328    6464 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0229 19:11:13.557328    6464 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0229 19:11:13.557328    6464 command_runner.go:130] > daemonset.apps/kindnet configured
	I0229 19:11:13.557406    6464 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.5521392s)
	I0229 19:11:13.557493    6464 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 19:11:13.557717    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods
	I0229 19:11:13.557804    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:13.557804    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:13.557804    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:13.564171    6464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 19:11:13.564171    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:13.564171    6464 round_trippers.go:580]     Audit-Id: 4aad6fe2-0388-42a5-a0ca-be3104a22f4a
	I0229 19:11:13.564171    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:13.564171    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:13.564171    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:13.564171    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:13.564171    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:13 GMT
	I0229 19:11:13.566161    6464 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1635"},"items":[{"metadata":{"name":"coredns-5dd5756b68-5qhb2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"cb647b50-f478-4265-9ff1-b66190c46393","resourceVersion":"1569","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3eee4899-9868-46e9-9907-7fbe2995bab1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eee4899-9868-46e9-9907-7fbe2995bab1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83525 chars]

	I0229 19:11:13.572176    6464 system_pods.go:59] 12 kube-system pods found
	I0229 19:11:13.572176    6464 system_pods.go:61] "coredns-5dd5756b68-5qhb2" [cb647b50-f478-4265-9ff1-b66190c46393] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 19:11:13.572176    6464 system_pods.go:61] "etcd-multinode-421600" [a57a6b03-e79b-4fcd-8750-480d46e6feb7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0229 19:11:13.572176    6464 system_pods.go:61] "kindnet-447dh" [c2052338-6892-465a-b1d4-c4247c9ac2a0] Running
	I0229 19:11:13.572176    6464 system_pods.go:61] "kindnet-7nzdd" [0ddba541-4eca-46f3-a45a-35433dcefe6c] Running
	I0229 19:11:13.572176    6464 system_pods.go:61] "kindnet-zblbg" [1ea7f301-b0fb-4708-85d2-d1256cdda09c] Running
	I0229 19:11:13.572176    6464 system_pods.go:61] "kube-apiserver-multinode-421600" [456b1ada-afd0-416c-a95f-71bea88e161d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 19:11:13.572176    6464 system_pods.go:61] "kube-controller-manager-multinode-421600" [a41ee888-f6df-43d4-9799-67a9ef0b6c87] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0229 19:11:13.572176    6464 system_pods.go:61] "kube-proxy-7c7xc" [6f8e7fe9-d8e5-47ca-80fd-7e5f7ae43140] Running
	I0229 19:11:13.572176    6464 system_pods.go:61] "kube-proxy-fpk6m" [4c99c6ec-5ab0-434d-b5a9-cb24b10f8bbf] Running
	I0229 19:11:13.572176    6464 system_pods.go:61] "kube-proxy-rhg8l" [58dfdc35-3e50-486d-b7a7-5bae65934cd5] Running
	I0229 19:11:13.572176    6464 system_pods.go:61] "kube-scheduler-multinode-421600" [6742b97c-a3db-4fca-8da3-54fcde6d405a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 19:11:13.572176    6464 system_pods.go:61] "storage-provisioner" [98ad07fa-8673-4933-9197-b7ceb8a3afbc] Running
	I0229 19:11:13.572176    6464 system_pods.go:74] duration metric: took 14.6818ms to wait for pod list to return data ...
	I0229 19:11:13.572176    6464 node_conditions.go:102] verifying NodePressure condition ...
	I0229 19:11:13.572176    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes
	I0229 19:11:13.572176    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:13.572176    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:13.572176    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:13.576172    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:11:13.576172    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:13.576172    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:13 GMT
	I0229 19:11:13.576172    6464 round_trippers.go:580]     Audit-Id: 70321fa9-8f16-412c-aa8c-3f2c4679654d
	I0229 19:11:13.576172    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:13.576172    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:13.576172    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:13.576172    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:13.577159    6464 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1637"},"items":[{"metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1553","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 14855 chars]
	I0229 19:11:13.578152    6464 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 19:11:13.578152    6464 node_conditions.go:123] node cpu capacity is 2
	I0229 19:11:13.578152    6464 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 19:11:13.578152    6464 node_conditions.go:123] node cpu capacity is 2
	I0229 19:11:13.578152    6464 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 19:11:13.578152    6464 node_conditions.go:123] node cpu capacity is 2
	I0229 19:11:13.578152    6464 node_conditions.go:105] duration metric: took 5.9761ms to run NodePressure ...
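The NodePressure check above reads each node's ephemeral-storage and CPU capacity out of the NodeList response; with three nodes the same two values repeat three times. A sketch of extracting those fields (the node objects here are a minimal stand-in for the truncated API body, not the full schema):

```python
def node_capacities(node_list: dict) -> list[tuple[str, str, int]]:
    """Return (name, ephemeral-storage, cpu) for each node in a NodeList body."""
    out = []
    for item in node_list["items"]:
        cap = item["status"]["capacity"]
        out.append((item["metadata"]["name"],
                    cap["ephemeral-storage"],
                    int(cap["cpu"])))
    return out


# Minimal stand-in for one node from the (truncated) NodeList logged above.
nodes = {"items": [
    {"metadata": {"name": "multinode-421600"},
     "status": {"capacity": {"ephemeral-storage": "17734596Ki", "cpu": "2"}}},
]}
caps = node_capacities(nodes)
```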
	I0229 19:11:13.578152    6464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 19:11:13.762687    6464 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0229 19:11:13.830701    6464 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0229 19:11:13.832726    6464 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0229 19:11:13.832726    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0229 19:11:13.833691    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:13.833691    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:13.833691    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:13.837731    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:11:13.837979    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:13.837979    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:13.837979    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:13.837979    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:13.837979    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:14 GMT
	I0229 19:11:13.837979    6464 round_trippers.go:580]     Audit-Id: cfc8a9a4-fe7c-4997-9b5b-ab190ad37ba3
	I0229 19:11:13.838096    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:13.838859    6464 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1639"},"items":[{"metadata":{"name":"etcd-multinode-421600","namespace":"kube-system","uid":"a57a6b03-e79b-4fcd-8750-480d46e6feb7","resourceVersion":"1565","creationTimestamp":"2024-02-29T19:11:10Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.26.52.109:2379","kubernetes.io/config.hash":"ffd246c3f34c2bcd65e63e05d5465206","kubernetes.io/config.mirror":"ffd246c3f34c2bcd65e63e05d5465206","kubernetes.io/config.seen":"2024-02-29T19:11:04.922860790Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:11:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f: [truncated 29350 chars]
	I0229 19:11:13.841602    6464 kubeadm.go:787] kubelet initialised
	I0229 19:11:13.841694    6464 kubeadm.go:788] duration metric: took 8.9674ms waiting for restarted kubelet to initialise ...
	I0229 19:11:13.841694    6464 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:11:13.841881    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods
	I0229 19:11:13.841970    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:13.841970    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:13.841970    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:13.847637    6464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 19:11:13.847874    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:13.847874    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:13.847874    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:13.847874    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:14 GMT
	I0229 19:11:13.847874    6464 round_trippers.go:580]     Audit-Id: c0d9bbf0-d4f2-4ba3-b6d6-6f6348fbebd3
	I0229 19:11:13.847874    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:13.847874    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:13.849022    6464 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1639"},"items":[{"metadata":{"name":"coredns-5dd5756b68-5qhb2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"cb647b50-f478-4265-9ff1-b66190c46393","resourceVersion":"1569","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3eee4899-9868-46e9-9907-7fbe2995bab1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eee4899-9868-46e9-9907-7fbe2995bab1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83525 chars]
	I0229 19:11:13.851571    6464 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-5qhb2" in "kube-system" namespace to be "Ready" ...
	I0229 19:11:13.851571    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5qhb2
	I0229 19:11:13.851571    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:13.851571    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:13.851571    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:13.855620    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:11:13.855620    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:13.855620    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:13.855620    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:13.855620    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:14 GMT
	I0229 19:11:13.855620    6464 round_trippers.go:580]     Audit-Id: 7fa7c384-809c-40c1-a215-fb5d0078980e
	I0229 19:11:13.855620    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:13.855620    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:13.855620    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5qhb2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"cb647b50-f478-4265-9ff1-b66190c46393","resourceVersion":"1569","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3eee4899-9868-46e9-9907-7fbe2995bab1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eee4899-9868-46e9-9907-7fbe2995bab1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0229 19:11:13.856548    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:13.856548    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:13.856548    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:13.856548    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:13.858563    6464 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 19:11:13.858563    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:13.858563    6464 round_trippers.go:580]     Audit-Id: 9fcb91e8-0b05-4562-adc0-2fc02f5d8376
	I0229 19:11:13.858563    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:13.858563    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:13.858563    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:13.858563    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:13.858563    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:14 GMT
	I0229 19:11:13.859572    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1553","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0229 19:11:13.859572    6464 pod_ready.go:97] node "multinode-421600" hosting pod "coredns-5dd5756b68-5qhb2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-421600" has status "Ready":"False"
	I0229 19:11:13.859572    6464 pod_ready.go:81] duration metric: took 8.0012ms waiting for pod "coredns-5dd5756b68-5qhb2" in "kube-system" namespace to be "Ready" ...
	E0229 19:11:13.859572    6464 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-421600" hosting pod "coredns-5dd5756b68-5qhb2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-421600" has status "Ready":"False"
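The skip above encodes a shortcut in the readiness wait: a pod cannot become "Ready" while its hosting node reports `Ready: False`, so minikube checks the node first rather than burning the 4m timeout on the pod. A sketch of that gating check (simplified node objects and helper names are my own, not minikube's):

```python
def node_is_ready(node: dict) -> bool:
    """True iff the node reports a Ready condition with status "True"."""
    for cond in node.get("status", {}).get("conditions", []):
        if cond["type"] == "Ready":
            return cond["status"] == "True"
    return False


def should_wait_for_pod(node: dict) -> bool:
    # If the hosting node is not Ready, waiting on the pod is pointless:
    # skip it now and move on to the next pod, as the log does above.
    return node_is_ready(node)


# The node state reported above: "Ready":"False" -> skip the pod wait.
node = {"status": {"conditions": [{"type": "Ready", "status": "False"}]}}
wait = should_wait_for_pod(node)
```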
	I0229 19:11:13.859572    6464 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 19:11:13.859572    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-421600
	I0229 19:11:13.859572    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:13.859572    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:13.859572    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:13.862547    6464 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 19:11:13.862547    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:13.862547    6464 round_trippers.go:580]     Audit-Id: ab382dc6-3b44-46a9-af15-054df7366e20
	I0229 19:11:13.862547    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:13.862547    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:13.862547    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:13.862547    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:13.862547    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:14 GMT
	I0229 19:11:13.863579    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-421600","namespace":"kube-system","uid":"a57a6b03-e79b-4fcd-8750-480d46e6feb7","resourceVersion":"1565","creationTimestamp":"2024-02-29T19:11:10Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.26.52.109:2379","kubernetes.io/config.hash":"ffd246c3f34c2bcd65e63e05d5465206","kubernetes.io/config.mirror":"ffd246c3f34c2bcd65e63e05d5465206","kubernetes.io/config.seen":"2024-02-29T19:11:04.922860790Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:11:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6087 chars]
	I0229 19:11:13.863579    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:13.863579    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:13.863579    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:13.863579    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:13.866576    6464 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 19:11:13.866576    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:13.866576    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:13.866576    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:13.866576    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:13.866576    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:14 GMT
	I0229 19:11:13.866576    6464 round_trippers.go:580]     Audit-Id: 4758347d-1f3f-4544-b228-31f05af6a27b
	I0229 19:11:13.866576    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:13.866576    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1553","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0229 19:11:13.867549    6464 pod_ready.go:97] node "multinode-421600" hosting pod "etcd-multinode-421600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-421600" has status "Ready":"False"
	I0229 19:11:13.867549    6464 pod_ready.go:81] duration metric: took 7.9762ms waiting for pod "etcd-multinode-421600" in "kube-system" namespace to be "Ready" ...
	E0229 19:11:13.867549    6464 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-421600" hosting pod "etcd-multinode-421600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-421600" has status "Ready":"False"
	I0229 19:11:13.867549    6464 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 19:11:13.867549    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-421600
	I0229 19:11:13.867549    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:13.867549    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:13.867549    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:13.870578    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:11:13.870578    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:13.870578    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:13.870578    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:13.870578    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:13.870578    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:14 GMT
	I0229 19:11:13.870578    6464 round_trippers.go:580]     Audit-Id: 39608950-14a5-4887-bd42-7576df539f7b
	I0229 19:11:13.870578    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:13.870578    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-421600","namespace":"kube-system","uid":"456b1ada-afd0-416c-a95f-71bea88e161d","resourceVersion":"1566","creationTimestamp":"2024-02-29T19:11:10Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.26.52.109:8443","kubernetes.io/config.hash":"aec335819ecb0b3c60068e2ed02eb80d","kubernetes.io/config.mirror":"aec335819ecb0b3c60068e2ed02eb80d","kubernetes.io/config.seen":"2024-02-29T19:11:04.922862090Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:11:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7644 chars]
	I0229 19:11:13.871548    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:13.871548    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:13.871548    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:13.871548    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:13.874568    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:11:13.874568    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:13.874797    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:13.874797    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:14 GMT
	I0229 19:11:13.874797    6464 round_trippers.go:580]     Audit-Id: 63583170-90d8-410d-87b1-bcb39ab07458
	I0229 19:11:13.874797    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:13.874797    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:13.874797    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:13.874895    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1553","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0229 19:11:13.875346    6464 pod_ready.go:97] node "multinode-421600" hosting pod "kube-apiserver-multinode-421600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-421600" has status "Ready":"False"
	I0229 19:11:13.875346    6464 pod_ready.go:81] duration metric: took 7.796ms waiting for pod "kube-apiserver-multinode-421600" in "kube-system" namespace to be "Ready" ...
	E0229 19:11:13.875346    6464 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-421600" hosting pod "kube-apiserver-multinode-421600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-421600" has status "Ready":"False"
	I0229 19:11:13.875346    6464 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 19:11:13.875526    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-421600
	I0229 19:11:13.875526    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:13.875559    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:13.875559    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:13.877665    6464 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 19:11:13.877665    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:13.877665    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:14 GMT
	I0229 19:11:13.877665    6464 round_trippers.go:580]     Audit-Id: 9738f7e4-2317-48a1-a4be-4dcae8432b98
	I0229 19:11:13.877665    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:13.877665    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:13.877665    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:13.877665    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:13.877665    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-421600","namespace":"kube-system","uid":"a41ee888-f6df-43d4-9799-67a9ef0b6c87","resourceVersion":"1567","creationTimestamp":"2024-02-29T18:50:38Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"dabef371df5cd2a8b883d06621dfc6bd","kubernetes.io/config.mirror":"dabef371df5cd2a8b883d06621dfc6bd","kubernetes.io/config.seen":"2024-02-29T18:50:38.626332146Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7437 chars]
	I0229 19:11:13.965670    6464 request.go:629] Waited for 87.0012ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:13.965670    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:13.965670    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:13.965670    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:13.965670    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:13.969512    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:11:13.969512    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:13.970373    6464 round_trippers.go:580]     Audit-Id: 0c91a07b-c6d6-42a5-bf3f-4edb79e726eb
	I0229 19:11:13.970373    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:13.970373    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:13.970373    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:13.970373    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:13.970373    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:14 GMT
	I0229 19:11:13.970609    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1553","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0229 19:11:13.971238    6464 pod_ready.go:97] node "multinode-421600" hosting pod "kube-controller-manager-multinode-421600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-421600" has status "Ready":"False"
	I0229 19:11:13.971346    6464 pod_ready.go:81] duration metric: took 95.9326ms waiting for pod "kube-controller-manager-multinode-421600" in "kube-system" namespace to be "Ready" ...
	E0229 19:11:13.971440    6464 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-421600" hosting pod "kube-controller-manager-multinode-421600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-421600" has status "Ready":"False"
	I0229 19:11:13.971440    6464 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7c7xc" in "kube-system" namespace to be "Ready" ...
	I0229 19:11:14.168537    6464 request.go:629] Waited for 196.6891ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7c7xc
	I0229 19:11:14.169050    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7c7xc
	I0229 19:11:14.169142    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:14.169210    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:14.169242    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:14.173727    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:11:14.173727    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:14.173727    6464 round_trippers.go:580]     Audit-Id: 0d019bed-d3ce-4732-b99a-4ac07b2c11ec
	I0229 19:11:14.173727    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:14.173727    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:14.173727    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:14.173727    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:14.174263    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:14 GMT
	I0229 19:11:14.174837    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7c7xc","generateName":"kube-proxy-","namespace":"kube-system","uid":"6f8e7fe9-d8e5-47ca-80fd-7e5f7ae43140","resourceVersion":"579","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a888d9f6-ed77-4118-830b-881d923ceb9c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a888d9f6-ed77-4118-830b-881d923ceb9c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0229 19:11:14.371966    6464 request.go:629] Waited for 196.0144ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m02
	I0229 19:11:14.372352    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m02
	I0229 19:11:14.372441    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:14.372441    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:14.372441    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:14.378553    6464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 19:11:14.378692    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:14.378692    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:14.378692    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:14.378692    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:14.378692    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:14.378692    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:14 GMT
	I0229 19:11:14.378692    6464 round_trippers.go:580]     Audit-Id: 9715a19c-79e3-49a6-9f88-ad232da6f514
	I0229 19:11:14.378794    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"fdb7ee2c-ccad-4d0d-bef6-6790b83f5cb6","resourceVersion":"1486","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_07_16_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager": [truncated 3818 chars]
	I0229 19:11:14.378794    6464 pod_ready.go:92] pod "kube-proxy-7c7xc" in "kube-system" namespace has status "Ready":"True"
	I0229 19:11:14.378794    6464 pod_ready.go:81] duration metric: took 407.3309ms waiting for pod "kube-proxy-7c7xc" in "kube-system" namespace to be "Ready" ...
	I0229 19:11:14.378794    6464 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fpk6m" in "kube-system" namespace to be "Ready" ...
	I0229 19:11:14.558616    6464 request.go:629] Waited for 179.8124ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fpk6m
	I0229 19:11:14.558616    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fpk6m
	I0229 19:11:14.558616    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:14.558616    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:14.558616    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:14.564205    6464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 19:11:14.564205    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:14.564205    6464 round_trippers.go:580]     Audit-Id: 37bb03cb-9d32-466d-b9cb-c9d8dac1e7b0
	I0229 19:11:14.564205    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:14.564205    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:14.564205    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:14.564205    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:14.564205    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:14 GMT
	I0229 19:11:14.565437    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fpk6m","generateName":"kube-proxy-","namespace":"kube-system","uid":"4c99c6ec-5ab0-434d-b5a9-cb24b10f8bbf","resourceVersion":"1574","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a888d9f6-ed77-4118-830b-881d923ceb9c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a888d9f6-ed77-4118-830b-881d923ceb9c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
	I0229 19:11:14.761781    6464 request.go:629] Waited for 194.3452ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:14.761781    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:14.762139    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:14.762139    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:14.762139    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:14.765360    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:11:14.765360    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:14.765360    6464 round_trippers.go:580]     Audit-Id: 34d2d6fc-5b43-47fe-8c09-7b2fdca45ce8
	I0229 19:11:14.765360    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:14.765360    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:14.765360    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:14.765360    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:14.765360    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:14 GMT
	I0229 19:11:14.766599    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1553","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0229 19:11:14.767017    6464 pod_ready.go:97] node "multinode-421600" hosting pod "kube-proxy-fpk6m" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-421600" has status "Ready":"False"
	I0229 19:11:14.767126    6464 pod_ready.go:81] duration metric: took 388.3101ms waiting for pod "kube-proxy-fpk6m" in "kube-system" namespace to be "Ready" ...
	E0229 19:11:14.767126    6464 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-421600" hosting pod "kube-proxy-fpk6m" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-421600" has status "Ready":"False"
	I0229 19:11:14.767126    6464 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rhg8l" in "kube-system" namespace to be "Ready" ...
	I0229 19:11:14.965247    6464 request.go:629] Waited for 198.0098ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rhg8l
	I0229 19:11:14.965247    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rhg8l
	I0229 19:11:14.965247    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:14.965247    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:14.965247    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:14.969248    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:11:14.969248    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:14.969248    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:14.969248    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:14.969248    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:15 GMT
	I0229 19:11:14.969248    6464 round_trippers.go:580]     Audit-Id: 046ad507-b769-4978-badd-6cd3be91837a
	I0229 19:11:14.969248    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:14.969248    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:14.969248    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rhg8l","generateName":"kube-proxy-","namespace":"kube-system","uid":"58dfdc35-3e50-486d-b7a7-5bae65934cd5","resourceVersion":"1488","creationTimestamp":"2024-02-29T18:57:47Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a888d9f6-ed77-4118-830b-881d923ceb9c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:57:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a888d9f6-ed77-4118-830b-881d923ceb9c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5743 chars]
	I0229 19:11:15.172546    6464 request.go:629] Waited for 199.2159ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:11:15.172546    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:11:15.172546    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:15.172546    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:15.172546    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:15.177130    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:11:15.177130    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:15.177130    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:15.177130    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:15 GMT
	I0229 19:11:15.177130    6464 round_trippers.go:580]     Audit-Id: 66578984-217c-42e2-bf1a-294224c1448e
	I0229 19:11:15.177130    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:15.177229    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:15.177229    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:15.177893    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"096122b8-0719-4361-9b63-57130df92d29","resourceVersion":"1501","creationTimestamp":"2024-02-29T19:07:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_07_16_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3635 chars]
	I0229 19:11:15.178099    6464 pod_ready.go:92] pod "kube-proxy-rhg8l" in "kube-system" namespace has status "Ready":"True"
	I0229 19:11:15.178099    6464 pod_ready.go:81] duration metric: took 410.9508ms waiting for pod "kube-proxy-rhg8l" in "kube-system" namespace to be "Ready" ...
	I0229 19:11:15.178099    6464 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 19:11:15.361359    6464 request.go:629] Waited for 183.2493ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-421600
	I0229 19:11:15.361359    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-421600
	I0229 19:11:15.361359    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:15.361647    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:15.361647    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:15.367370    6464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 19:11:15.367370    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:15.367370    6464 round_trippers.go:580]     Audit-Id: 7a94cff2-b229-4d54-8527-3353e7fd4621
	I0229 19:11:15.367370    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:15.367809    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:15.367809    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:15.367809    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:15.367809    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:15 GMT
	I0229 19:11:15.368073    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-421600","namespace":"kube-system","uid":"6742b97c-a3db-4fca-8da3-54fcde6d405a","resourceVersion":"1564","creationTimestamp":"2024-02-29T18:50:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a2c94c0a4c322f0bf7fcafad0430344f","kubernetes.io/config.mirror":"a2c94c0a4c322f0bf7fcafad0430344f","kubernetes.io/config.seen":"2024-02-29T18:50:38.626333146Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5149 chars]
	I0229 19:11:15.566526    6464 request.go:629] Waited for 197.0819ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:15.566876    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:15.566876    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:15.566876    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:15.566876    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:15.570469    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:11:15.571442    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:15.571490    6464 round_trippers.go:580]     Audit-Id: c83c82e0-768f-4d51-b583-fb1481f18a4a
	I0229 19:11:15.571490    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:15.571490    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:15.571490    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:15.571490    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:15.571490    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:15 GMT
	I0229 19:11:15.571704    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1553","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0229 19:11:15.572229    6464 pod_ready.go:97] node "multinode-421600" hosting pod "kube-scheduler-multinode-421600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-421600" has status "Ready":"False"
	I0229 19:11:15.572229    6464 pod_ready.go:81] duration metric: took 394.108ms waiting for pod "kube-scheduler-multinode-421600" in "kube-system" namespace to be "Ready" ...
	E0229 19:11:15.572229    6464 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-421600" hosting pod "kube-scheduler-multinode-421600" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-421600" has status "Ready":"False"
	I0229 19:11:15.572229    6464 pod_ready.go:38] duration metric: took 1.7303472s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:11:15.572355    6464 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 19:11:15.589310    6464 command_runner.go:130] > -16
	I0229 19:11:15.589550    6464 ops.go:34] apiserver oom_adj: -16
	I0229 19:11:15.589550    6464 kubeadm.go:640] restartCluster took 12.990816s
	I0229 19:11:15.589550    6464 kubeadm.go:406] StartCluster complete in 13.0522454s
	I0229 19:11:15.589550    6464 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:11:15.589767    6464 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 19:11:15.591328    6464 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:11:15.592813    6464 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 19:11:15.592813    6464 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 19:11:15.593968    6464 out.go:177] * Enabled addons: 
	I0229 19:11:15.593383    6464 config.go:182] Loaded profile config "multinode-421600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 19:11:15.594667    6464 addons.go:505] enable addons completed in 1.854ms: enabled=[]
	I0229 19:11:15.604825    6464 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 19:11:15.605833    6464 kapi.go:59] client config for multinode-421600: &rest.Config{Host:"https://172.26.52.109:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-421600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-421600\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ff0600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 19:11:15.605833    6464 cert_rotation.go:137] Starting client certificate rotation controller
	I0229 19:11:15.606827    6464 round_trippers.go:463] GET https://172.26.52.109:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0229 19:11:15.606827    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:15.606827    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:15.606827    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:15.619763    6464 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0229 19:11:15.619763    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:15.619763    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:15 GMT
	I0229 19:11:15.619763    6464 round_trippers.go:580]     Audit-Id: de6d60fe-f501-4f11-8c24-10cf63042c37
	I0229 19:11:15.619763    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:15.619763    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:15.619763    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:15.619763    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:15.619763    6464 round_trippers.go:580]     Content-Length: 292
	I0229 19:11:15.619763    6464 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b9192a23-827d-4114-8861-df907bfdc0ef","resourceVersion":"1638","creationTimestamp":"2024-02-29T18:50:38Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0229 19:11:15.620425    6464 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-421600" context rescaled to 1 replicas
	I0229 19:11:15.620425    6464 start.go:223] Will wait 6m0s for node &{Name: IP:172.26.52.109 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 19:11:15.621321    6464 out.go:177] * Verifying Kubernetes components...
	I0229 19:11:15.631559    6464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:11:15.716265    6464 command_runner.go:130] > apiVersion: v1
	I0229 19:11:15.716265    6464 command_runner.go:130] > data:
	I0229 19:11:15.716265    6464 command_runner.go:130] >   Corefile: |
	I0229 19:11:15.716265    6464 command_runner.go:130] >     .:53 {
	I0229 19:11:15.716265    6464 command_runner.go:130] >         log
	I0229 19:11:15.716265    6464 command_runner.go:130] >         errors
	I0229 19:11:15.717174    6464 command_runner.go:130] >         health {
	I0229 19:11:15.717174    6464 command_runner.go:130] >            lameduck 5s
	I0229 19:11:15.717174    6464 command_runner.go:130] >         }
	I0229 19:11:15.717174    6464 command_runner.go:130] >         ready
	I0229 19:11:15.717174    6464 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0229 19:11:15.717174    6464 command_runner.go:130] >            pods insecure
	I0229 19:11:15.717269    6464 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0229 19:11:15.717269    6464 command_runner.go:130] >            ttl 30
	I0229 19:11:15.717291    6464 command_runner.go:130] >         }
	I0229 19:11:15.717291    6464 command_runner.go:130] >         prometheus :9153
	I0229 19:11:15.717291    6464 command_runner.go:130] >         hosts {
	I0229 19:11:15.717291    6464 command_runner.go:130] >            172.26.48.1 host.minikube.internal
	I0229 19:11:15.717291    6464 command_runner.go:130] >            fallthrough
	I0229 19:11:15.717291    6464 command_runner.go:130] >         }
	I0229 19:11:15.717291    6464 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0229 19:11:15.717372    6464 command_runner.go:130] >            max_concurrent 1000
	I0229 19:11:15.717372    6464 command_runner.go:130] >         }
	I0229 19:11:15.717506    6464 command_runner.go:130] >         cache 30
	I0229 19:11:15.717506    6464 command_runner.go:130] >         loop
	I0229 19:11:15.717506    6464 command_runner.go:130] >         reload
	I0229 19:11:15.717582    6464 command_runner.go:130] >         loadbalance
	I0229 19:11:15.717582    6464 command_runner.go:130] >     }
	I0229 19:11:15.717582    6464 command_runner.go:130] > kind: ConfigMap
	I0229 19:11:15.717582    6464 command_runner.go:130] > metadata:
	I0229 19:11:15.717582    6464 command_runner.go:130] >   creationTimestamp: "2024-02-29T18:50:38Z"
	I0229 19:11:15.717582    6464 command_runner.go:130] >   name: coredns
	I0229 19:11:15.717582    6464 command_runner.go:130] >   namespace: kube-system
	I0229 19:11:15.717582    6464 command_runner.go:130] >   resourceVersion: "339"
	I0229 19:11:15.717582    6464 command_runner.go:130] >   uid: 02fa6c60-1e04-4f3a-a567-42fb00116f24
	I0229 19:11:15.721278    6464 node_ready.go:35] waiting up to 6m0s for node "multinode-421600" to be "Ready" ...
	I0229 19:11:15.721591    6464 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0229 19:11:15.771426    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:15.771426    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:15.771426    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:15.771426    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:15.775510    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:11:15.775510    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:15.775510    6464 round_trippers.go:580]     Audit-Id: 57ad9844-7dc4-4b46-a074-94b664f04301
	I0229 19:11:15.775510    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:15.775510    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:15.775510    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:15.775510    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:15.775510    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:15 GMT
	I0229 19:11:15.776376    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1553","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0229 19:11:16.224542    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:16.224542    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:16.224672    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:16.224672    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:16.232259    6464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 19:11:16.232259    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:16.232259    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:16.232259    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:16.232259    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:16.232259    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:16.232259    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:16 GMT
	I0229 19:11:16.232259    6464 round_trippers.go:580]     Audit-Id: c79e4f4f-d8cb-42d0-891c-0fa7f8c59209
	I0229 19:11:16.232818    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1553","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0229 19:11:16.725987    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:16.726058    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:16.726058    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:16.726058    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:16.733671    6464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 19:11:16.733671    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:16.733671    6464 round_trippers.go:580]     Audit-Id: 5d22b953-6c7e-47fb-a635-d4e830868415
	I0229 19:11:16.733671    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:16.733671    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:16.733671    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:16.733671    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:16.733671    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:16 GMT
	I0229 19:11:16.734371    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1553","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0229 19:11:17.228851    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:17.228922    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:17.228922    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:17.228922    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:17.233488    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:11:17.233830    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:17.233830    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:17 GMT
	I0229 19:11:17.233830    6464 round_trippers.go:580]     Audit-Id: ce9cc4dd-4936-43af-9c19-6113e40b98f2
	I0229 19:11:17.233916    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:17.233916    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:17.233916    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:17.233916    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:17.233916    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1553","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0229 19:11:17.730253    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:17.730253    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:17.730253    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:17.730253    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:17.734883    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:11:17.734883    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:17.734883    6464 round_trippers.go:580]     Audit-Id: 3dc4df5b-8c32-4722-a295-538760fb2ff0
	I0229 19:11:17.734883    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:17.734883    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:17.734883    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:17.734883    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:17.734883    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:17 GMT
	I0229 19:11:17.734883    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1553","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0229 19:11:17.736029    6464 node_ready.go:58] node "multinode-421600" has status "Ready":"False"
	I0229 19:11:18.221987    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:18.222075    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:18.222075    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:18.222075    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:18.226462    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:11:18.226462    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:18.226820    6464 round_trippers.go:580]     Audit-Id: d9fc306b-5541-450e-93bd-5098ef36080d
	I0229 19:11:18.226820    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:18.226820    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:18.226820    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:18.226820    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:18.226820    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:18 GMT
	I0229 19:11:18.227090    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1553","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0229 19:11:18.730499    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:18.730499    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:18.730499    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:18.730857    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:18.735079    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:11:18.735172    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:18.735172    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:18.735172    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:18.735172    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:18.735268    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:18 GMT
	I0229 19:11:18.735268    6464 round_trippers.go:580]     Audit-Id: 198f06a6-faa9-44a6-ba55-16b755912df4
	I0229 19:11:18.735268    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:18.735684    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1553","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0229 19:11:19.230618    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:19.230704    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:19.230789    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:19.230789    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:19.234247    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:11:19.234247    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:19.234247    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:19.234247    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:19.234247    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:19 GMT
	I0229 19:11:19.234247    6464 round_trippers.go:580]     Audit-Id: 20b7044d-ee30-4e2b-99fb-92f56f9a073c
	I0229 19:11:19.234247    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:19.234247    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:19.235127    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1553","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0229 19:11:19.731085    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:19.731170    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:19.731170    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:19.731170    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:19.734438    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:11:19.734438    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:19.734438    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:19 GMT
	I0229 19:11:19.734687    6464 round_trippers.go:580]     Audit-Id: 9fb3070f-f9dc-431b-b146-e240d8536446
	I0229 19:11:19.734687    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:19.734687    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:19.734687    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:19.734687    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:19.734978    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1553","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0229 19:11:20.228907    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:20.228974    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:20.229038    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:20.229038    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:20.236173    6464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 19:11:20.236173    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:20.236173    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:20.236869    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:20.236939    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:20 GMT
	I0229 19:11:20.236939    6464 round_trippers.go:580]     Audit-Id: af053c6d-d883-4a23-b93d-81ae36e50ea1
	I0229 19:11:20.236939    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:20.236939    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:20.237191    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1553","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0229 19:11:20.237234    6464 node_ready.go:58] node "multinode-421600" has status "Ready":"False"
	I0229 19:11:20.727017    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:20.727105    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:20.727105    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:20.727105    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:20.731836    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:11:20.731836    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:20.731836    6464 round_trippers.go:580]     Audit-Id: e30098a8-a562-4adb-9a3d-e3decea60e6d
	I0229 19:11:20.731912    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:20.731912    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:20.731912    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:20.731912    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:20.731912    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:20 GMT
	I0229 19:11:20.732141    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1651","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0229 19:11:20.732766    6464 node_ready.go:49] node "multinode-421600" has status "Ready":"True"
	I0229 19:11:20.732766    6464 node_ready.go:38] duration metric: took 5.0112105s waiting for node "multinode-421600" to be "Ready" ...
	I0229 19:11:20.732766    6464 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:11:20.732905    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods
	I0229 19:11:20.732905    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:20.732905    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:20.732905    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:20.741311    6464 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0229 19:11:20.741970    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:20.741970    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:20 GMT
	I0229 19:11:20.741970    6464 round_trippers.go:580]     Audit-Id: e61154ed-ac09-4307-ad79-c1b55b30060e
	I0229 19:11:20.741970    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:20.741970    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:20.741970    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:20.741970    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:20.744015    6464 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1652"},"items":[{"metadata":{"name":"coredns-5dd5756b68-5qhb2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"cb647b50-f478-4265-9ff1-b66190c46393","resourceVersion":"1569","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3eee4899-9868-46e9-9907-7fbe2995bab1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eee4899-9868-46e9-9907-7fbe2995bab1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83263 chars]
	I0229 19:11:20.749506    6464 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5qhb2" in "kube-system" namespace to be "Ready" ...
	I0229 19:11:20.749506    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5qhb2
	I0229 19:11:20.749506    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:20.749506    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:20.749506    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:20.752094    6464 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 19:11:20.753099    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:20.753099    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:20.753099    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:20.753099    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:20.753099    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:20 GMT
	I0229 19:11:20.753099    6464 round_trippers.go:580]     Audit-Id: 670215a5-eb62-4b10-8ad2-51cd832869d0
	I0229 19:11:20.753099    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:20.753532    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5qhb2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"cb647b50-f478-4265-9ff1-b66190c46393","resourceVersion":"1569","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3eee4899-9868-46e9-9907-7fbe2995bab1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eee4899-9868-46e9-9907-7fbe2995bab1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0229 19:11:20.753532    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:20.754069    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:20.754103    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:20.754103    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:20.757286    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:11:20.757286    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:20.757286    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:20.757286    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:20.757286    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:20.757286    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:20.757402    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:20 GMT
	I0229 19:11:20.757402    6464 round_trippers.go:580]     Audit-Id: 3b45edc0-e425-470f-ae7d-a4e8a9213099
	I0229 19:11:20.757547    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1651","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0229 19:11:21.260296    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5qhb2
	I0229 19:11:21.260418    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:21.260418    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:21.260418    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:21.263728    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:11:21.263728    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:21.263728    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:21.263728    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:21.263728    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:21 GMT
	I0229 19:11:21.263728    6464 round_trippers.go:580]     Audit-Id: 6fd19e9c-577d-4bed-ba14-ec20369046fc
	I0229 19:11:21.263728    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:21.263728    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:21.265152    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5qhb2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"cb647b50-f478-4265-9ff1-b66190c46393","resourceVersion":"1569","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3eee4899-9868-46e9-9907-7fbe2995bab1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eee4899-9868-46e9-9907-7fbe2995bab1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0229 19:11:21.265831    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:21.265903    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:21.265903    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:21.265903    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:21.271649    6464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 19:11:21.271649    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:21.272053    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:21 GMT
	I0229 19:11:21.272053    6464 round_trippers.go:580]     Audit-Id: a96922f6-e10b-4378-b62c-b988da66f371
	I0229 19:11:21.272053    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:21.272053    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:21.272087    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:21.272087    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:21.272248    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1651","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0229 19:11:21.761123    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5qhb2
	I0229 19:11:21.761123    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:21.761123    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:21.761123    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:21.764719    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:11:21.764719    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:21.765340    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:21.765340    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:21.765340    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:21.765340    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:21.765340    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:21 GMT
	I0229 19:11:21.765340    6464 round_trippers.go:580]     Audit-Id: d0b1def0-dfdd-4be6-9741-bc986ee409c2
	I0229 19:11:21.765439    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5qhb2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"cb647b50-f478-4265-9ff1-b66190c46393","resourceVersion":"1569","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3eee4899-9868-46e9-9907-7fbe2995bab1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eee4899-9868-46e9-9907-7fbe2995bab1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0229 19:11:21.766137    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:21.766137    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:21.766137    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:21.766137    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:21.769714    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:11:21.769714    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:21.769714    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:21.769714    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:21.769714    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:21 GMT
	I0229 19:11:21.769714    6464 round_trippers.go:580]     Audit-Id: e01ce9dd-55d7-4a10-893e-80b6da7dc934
	I0229 19:11:21.769714    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:21.769714    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:21.770178    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1651","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0229 19:11:22.263914    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5qhb2
	I0229 19:11:22.264059    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:22.264059    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:22.264059    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:22.268430    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:11:22.268577    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:22.268577    6464 round_trippers.go:580]     Audit-Id: 8b3989d1-8338-48f7-b418-3f562755b5e4
	I0229 19:11:22.268577    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:22.268577    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:22.268669    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:22.268669    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:22.268669    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:22 GMT
	I0229 19:11:22.269359    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5qhb2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"cb647b50-f478-4265-9ff1-b66190c46393","resourceVersion":"1569","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3eee4899-9868-46e9-9907-7fbe2995bab1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eee4899-9868-46e9-9907-7fbe2995bab1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0229 19:11:22.269951    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:22.269951    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:22.269951    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:22.269951    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:22.273216    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:11:22.273216    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:22.273216    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:22.273216    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:22 GMT
	I0229 19:11:22.273216    6464 round_trippers.go:580]     Audit-Id: 7af0916c-737b-4a58-9f52-9946f174936e
	I0229 19:11:22.273216    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:22.273216    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:22.273216    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:22.273791    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1651","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0229 19:11:22.750461    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5qhb2
	I0229 19:11:22.750554    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:22.750554    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:22.750554    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:22.757402    6464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 19:11:22.757804    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:22.757804    6464 round_trippers.go:580]     Audit-Id: 253c52ca-cfa7-4cc6-9ede-76c7c35b1e47
	I0229 19:11:22.757804    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:22.757804    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:22.757804    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:22.757804    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:22.757804    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:22 GMT
	I0229 19:11:22.758237    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5qhb2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"cb647b50-f478-4265-9ff1-b66190c46393","resourceVersion":"1569","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3eee4899-9868-46e9-9907-7fbe2995bab1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eee4899-9868-46e9-9907-7fbe2995bab1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0229 19:11:22.758808    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:22.759009    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:22.759009    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:22.759009    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:22.765669    6464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 19:11:22.765669    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:22.765669    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:22 GMT
	I0229 19:11:22.765669    6464 round_trippers.go:580]     Audit-Id: ed6ea68f-782e-4160-90db-b56c7ec1d38c
	I0229 19:11:22.765669    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:22.765669    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:22.765669    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:22.765669    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:22.765669    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1651","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0229 19:11:22.765669    6464 pod_ready.go:102] pod "coredns-5dd5756b68-5qhb2" in "kube-system" namespace has status "Ready":"False"
	I0229 19:11:23.257658    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5qhb2
	I0229 19:11:23.257658    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:23.257658    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:23.257658    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:23.261235    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:11:23.262056    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:23.262056    6464 round_trippers.go:580]     Audit-Id: f177a22a-496b-49c3-81ff-da477135dde3
	I0229 19:11:23.262056    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:23.262056    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:23.262056    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:23.262056    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:23.262056    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:23 GMT
	I0229 19:11:23.262274    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5qhb2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"cb647b50-f478-4265-9ff1-b66190c46393","resourceVersion":"1569","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3eee4899-9868-46e9-9907-7fbe2995bab1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eee4899-9868-46e9-9907-7fbe2995bab1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0229 19:11:23.263082    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:23.263082    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:23.263082    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:23.263082    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:23.265861    6464 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 19:11:23.266441    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:23.266441    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:23.266441    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:23.266441    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:23 GMT
	I0229 19:11:23.266441    6464 round_trippers.go:580]     Audit-Id: 12746ba2-7fd6-42bb-99a4-129735f885b6
	I0229 19:11:23.266441    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:23.266441    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:23.266866    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1651","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0229 19:11:23.762402    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5qhb2
	I0229 19:11:23.762402    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:23.762402    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:23.762402    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:23.769210    6464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 19:11:23.769210    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:23.769210    6464 round_trippers.go:580]     Audit-Id: 34070429-edbc-4b56-bdd0-a063fd868819
	I0229 19:11:23.769210    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:23.769210    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:23.769210    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:23.769210    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:23.769210    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:23 GMT
	I0229 19:11:23.769923    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5qhb2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"cb647b50-f478-4265-9ff1-b66190c46393","resourceVersion":"1569","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3eee4899-9868-46e9-9907-7fbe2995bab1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eee4899-9868-46e9-9907-7fbe2995bab1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0229 19:11:23.769923    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:23.770621    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:23.770621    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:23.770621    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:23.774090    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:11:23.774090    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:23.774090    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:23 GMT
	I0229 19:11:23.774090    6464 round_trippers.go:580]     Audit-Id: 45bc1bf0-7d31-484c-ba15-fbe59b298872
	I0229 19:11:23.774090    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:23.774090    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:23.774090    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:23.774090    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:23.774090    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1651","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0229 19:11:24.262401    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5qhb2
	I0229 19:11:24.262401    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:24.262401    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:24.262401    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:24.266568    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:11:24.266568    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:24.266568    6464 round_trippers.go:580]     Audit-Id: 4ed9e680-1e13-4b5b-acaa-4596e14e1b82
	I0229 19:11:24.266568    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:24.266568    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:24.266568    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:24.266568    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:24.266568    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:24 GMT
	I0229 19:11:24.267144    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5qhb2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"cb647b50-f478-4265-9ff1-b66190c46393","resourceVersion":"1569","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3eee4899-9868-46e9-9907-7fbe2995bab1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eee4899-9868-46e9-9907-7fbe2995bab1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0229 19:11:24.267825    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:24.267825    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:24.267825    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:24.267825    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:24.270009    6464 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 19:11:24.270009    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:24.270009    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:24.270009    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:24.270009    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:24.270009    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:24 GMT
	I0229 19:11:24.270009    6464 round_trippers.go:580]     Audit-Id: c4494c10-6929-4a1f-80e4-4406dda7610b
	I0229 19:11:24.270009    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:24.271566    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1651","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0229 19:11:24.761027    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5qhb2
	I0229 19:11:24.761027    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:24.761114    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:24.761114    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:24.764470    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:11:24.764470    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:24.764470    6464 round_trippers.go:580]     Audit-Id: aa16bcb5-b090-4c3c-94ed-ea1b07a43381
	I0229 19:11:24.764470    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:24.764470    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:24.765132    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:24.765132    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:24.765132    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:24 GMT
	I0229 19:11:24.765462    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5qhb2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"cb647b50-f478-4265-9ff1-b66190c46393","resourceVersion":"1569","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3eee4899-9868-46e9-9907-7fbe2995bab1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eee4899-9868-46e9-9907-7fbe2995bab1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0229 19:11:24.766511    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:24.766582    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:24.766582    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:24.766653    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:24.770502    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:11:24.770502    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:24.770502    6464 round_trippers.go:580]     Audit-Id: 0a16a8bd-32c8-4190-ad9d-13b16de24309
	I0229 19:11:24.770502    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:24.770502    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:24.770502    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:24.770502    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:24.770502    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:24 GMT
	I0229 19:11:24.770719    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1651","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0229 19:11:24.771246    6464 pod_ready.go:102] pod "coredns-5dd5756b68-5qhb2" in "kube-system" namespace has status "Ready":"False"
	I0229 19:11:25.262498    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5qhb2
	I0229 19:11:25.262498    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:25.262498    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:25.262498    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:25.267397    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:11:25.267397    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:25.267397    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:25.267478    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:25.267478    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:25 GMT
	I0229 19:11:25.267478    6464 round_trippers.go:580]     Audit-Id: 5ecb2e10-2bde-4f85-ac78-cc965858e842
	I0229 19:11:25.267478    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:25.267478    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:25.267732    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5qhb2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"cb647b50-f478-4265-9ff1-b66190c46393","resourceVersion":"1569","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3eee4899-9868-46e9-9907-7fbe2995bab1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eee4899-9868-46e9-9907-7fbe2995bab1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0229 19:11:25.268394    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:25.268394    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:25.268468    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:25.268468    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:25.271494    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:11:25.271494    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:25.271494    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:25.271494    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:25 GMT
	I0229 19:11:25.271494    6464 round_trippers.go:580]     Audit-Id: 9c964cf8-db99-43e3-a184-e22679b70b15
	I0229 19:11:25.271494    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:25.271494    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:25.271494    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:25.271955    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1651","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0229 19:11:25.750473    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5qhb2
	I0229 19:11:25.750473    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:25.750473    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:25.750666    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:25.753898    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:11:25.753898    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:25.753898    6464 round_trippers.go:580]     Audit-Id: 5afd35fe-1bd4-4744-8d8f-55b4934b5b49
	I0229 19:11:25.753898    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:25.753898    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:25.753898    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:25.753898    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:25.753898    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:25 GMT
	I0229 19:11:25.754751    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5qhb2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"cb647b50-f478-4265-9ff1-b66190c46393","resourceVersion":"1569","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3eee4899-9868-46e9-9907-7fbe2995bab1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eee4899-9868-46e9-9907-7fbe2995bab1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0229 19:11:25.755391    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:25.755391    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:25.755468    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:25.755468    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:25.760959    6464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 19:11:25.760959    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:25.760959    6464 round_trippers.go:580]     Audit-Id: ad045a2a-f3c0-44d5-a7da-3442cbdf7828
	I0229 19:11:25.760959    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:25.760959    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:25.761039    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:25.761039    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:25.761039    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:25 GMT
	I0229 19:11:25.761039    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1651","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0229 19:11:26.265024    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5qhb2
	I0229 19:11:26.265024    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:26.265024    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:26.265024    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:26.269359    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:11:26.269359    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:26.269359    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:26.269359    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:26 GMT
	I0229 19:11:26.269359    6464 round_trippers.go:580]     Audit-Id: 681c3f38-b44f-4a7d-aca7-658f1a5a4661
	I0229 19:11:26.269359    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:26.269359    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:26.269359    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:26.269861    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5qhb2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"cb647b50-f478-4265-9ff1-b66190c46393","resourceVersion":"1569","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3eee4899-9868-46e9-9907-7fbe2995bab1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eee4899-9868-46e9-9907-7fbe2995bab1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0229 19:11:26.270495    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:26.270560    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:26.270592    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:26.270592    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:26.277384    6464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 19:11:26.277384    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:26.277384    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:26.277384    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:26 GMT
	I0229 19:11:26.277384    6464 round_trippers.go:580]     Audit-Id: d68b3e7d-5fe4-491a-a23b-f82a65fb12c3
	I0229 19:11:26.277384    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:26.277384    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:26.277463    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:26.277715    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1651","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0229 19:11:26.752100    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5qhb2
	I0229 19:11:26.752173    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:26.752173    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:26.752173    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:26.755703    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:11:26.755703    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:26.755703    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:26 GMT
	I0229 19:11:26.755703    6464 round_trippers.go:580]     Audit-Id: e740b850-5d23-402b-9451-44fd837336bb
	I0229 19:11:26.755703    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:26.755703    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:26.755703    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:26.755703    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:26.756495    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5qhb2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"cb647b50-f478-4265-9ff1-b66190c46393","resourceVersion":"1569","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3eee4899-9868-46e9-9907-7fbe2995bab1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eee4899-9868-46e9-9907-7fbe2995bab1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0229 19:11:26.757286    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:26.757286    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:26.757286    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:26.757286    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:26.767703    6464 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0229 19:11:26.768139    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:26.768139    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:26 GMT
	I0229 19:11:26.768139    6464 round_trippers.go:580]     Audit-Id: 508720cb-66ab-4045-bfd6-3a4260b4535a
	I0229 19:11:26.768139    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:26.768139    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:26.768139    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:26.768139    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:26.768353    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1651","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0229 19:11:27.261354    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5qhb2
	I0229 19:11:27.261354    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:27.261354    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:27.261438    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:27.264923    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:11:27.264923    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:27.264923    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:27.264923    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:27.265928    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:27 GMT
	I0229 19:11:27.265928    6464 round_trippers.go:580]     Audit-Id: 1afe1024-7160-41c7-8704-f8665c219622
	I0229 19:11:27.265952    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:27.265952    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:27.266043    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5qhb2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"cb647b50-f478-4265-9ff1-b66190c46393","resourceVersion":"1569","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3eee4899-9868-46e9-9907-7fbe2995bab1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eee4899-9868-46e9-9907-7fbe2995bab1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6545 chars]
	I0229 19:11:27.267148    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:27.267216    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:27.267216    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:27.267216    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:27.274672    6464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 19:11:27.274672    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:27.274672    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:27.274672    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:27.274672    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:27.274672    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:27.274672    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:27 GMT
	I0229 19:11:27.274672    6464 round_trippers.go:580]     Audit-Id: 2e814f2b-6b72-48d4-8ad0-ad7658e059e7
	I0229 19:11:27.274672    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1651","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0229 19:11:27.275719    6464 pod_ready.go:102] pod "coredns-5dd5756b68-5qhb2" in "kube-system" namespace has status "Ready":"False"
	I0229 19:11:27.755473    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5qhb2
	I0229 19:11:27.755473    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:27.755473    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:27.755473    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:27.759028    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:11:27.759028    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:27.759028    6464 round_trippers.go:580]     Audit-Id: f85c3a80-1102-4ad9-92ce-85d61e18d0a9
	I0229 19:11:27.759028    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:27.759028    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:27.759028    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:27.759028    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:27.759028    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:27 GMT
	I0229 19:11:27.760064    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5qhb2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"cb647b50-f478-4265-9ff1-b66190c46393","resourceVersion":"1679","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3eee4899-9868-46e9-9907-7fbe2995bab1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eee4899-9868-46e9-9907-7fbe2995bab1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6722 chars]
	I0229 19:11:27.760739    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:27.760739    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:27.760824    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:27.760824    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:27.764047    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:11:27.764047    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:27.764047    6464 round_trippers.go:580]     Audit-Id: 5e232040-39c9-46d7-8df1-07867ffa48d0
	I0229 19:11:27.764047    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:27.764047    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:27.764047    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:27.764047    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:27.764047    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:27 GMT
	I0229 19:11:27.764783    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1651","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0229 19:11:28.255810    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5qhb2
	I0229 19:11:28.255810    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:28.255810    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:28.255810    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:28.259098    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:11:28.260143    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:28.260143    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:28.260143    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:28.260143    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:28.260143    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:28 GMT
	I0229 19:11:28.260143    6464 round_trippers.go:580]     Audit-Id: ec04964f-28e7-42f3-9c39-1e1f157cbb74
	I0229 19:11:28.260143    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:28.260633    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5qhb2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"cb647b50-f478-4265-9ff1-b66190c46393","resourceVersion":"1679","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3eee4899-9868-46e9-9907-7fbe2995bab1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eee4899-9868-46e9-9907-7fbe2995bab1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6722 chars]
	I0229 19:11:28.260831    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:28.260831    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:28.260831    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:28.260831    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:28.264580    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:11:28.264722    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:28.264722    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:28 GMT
	I0229 19:11:28.264722    6464 round_trippers.go:580]     Audit-Id: 7b2f2ed7-0b83-46a1-9693-26649c33e366
	I0229 19:11:28.264722    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:28.264722    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:28.264722    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:28.264722    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:28.264722    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1651","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0229 19:11:28.760904    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5qhb2
	I0229 19:11:28.761016    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:28.761016    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:28.761016    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:28.766280    6464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 19:11:28.766280    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:28.766280    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:28.766280    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:28 GMT
	I0229 19:11:28.766280    6464 round_trippers.go:580]     Audit-Id: 7e84519f-9e6f-45bb-9bb5-8ca0a44a8e4c
	I0229 19:11:28.766280    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:28.766280    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:28.766280    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:28.767563    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5qhb2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"cb647b50-f478-4265-9ff1-b66190c46393","resourceVersion":"1685","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3eee4899-9868-46e9-9907-7fbe2995bab1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eee4899-9868-46e9-9907-7fbe2995bab1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6493 chars]
	I0229 19:11:28.768102    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:28.768102    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:28.768102    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:28.768102    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:28.772139    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:11:28.772139    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:28.772139    6464 round_trippers.go:580]     Audit-Id: 568756be-cdb1-4a62-8579-bda0215c8bd1
	I0229 19:11:28.772534    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:28.772534    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:28.772534    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:28.772579    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:28.772579    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:28 GMT
	I0229 19:11:28.772607    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1651","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0229 19:11:28.773453    6464 pod_ready.go:92] pod "coredns-5dd5756b68-5qhb2" in "kube-system" namespace has status "Ready":"True"
	I0229 19:11:28.773483    6464 pod_ready.go:81] duration metric: took 8.0235316s waiting for pod "coredns-5dd5756b68-5qhb2" in "kube-system" namespace to be "Ready" ...
	I0229 19:11:28.773554    6464 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 19:11:28.773724    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-421600
	I0229 19:11:28.773802    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:28.773832    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:28.773875    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:28.776583    6464 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 19:11:28.776583    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:28.776583    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:28 GMT
	I0229 19:11:28.776583    6464 round_trippers.go:580]     Audit-Id: a95d32c4-4d7d-48de-b6a0-dfec71957333
	I0229 19:11:28.776583    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:28.776583    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:28.776583    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:28.776583    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:28.776583    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-421600","namespace":"kube-system","uid":"a57a6b03-e79b-4fcd-8750-480d46e6feb7","resourceVersion":"1655","creationTimestamp":"2024-02-29T19:11:10Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.26.52.109:2379","kubernetes.io/config.hash":"ffd246c3f34c2bcd65e63e05d5465206","kubernetes.io/config.mirror":"ffd246c3f34c2bcd65e63e05d5465206","kubernetes.io/config.seen":"2024-02-29T19:11:04.922860790Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:11:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5863 chars]
	I0229 19:11:28.778243    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:28.778243    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:28.778243    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:28.778243    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:28.780426    6464 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 19:11:28.780426    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:28.780426    6464 round_trippers.go:580]     Audit-Id: 473bfa57-0ed4-4131-bb13-f6c5e9d6fb7f
	I0229 19:11:28.780426    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:28.780426    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:28.780426    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:28.780426    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:28.780426    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:28 GMT
	I0229 19:11:28.781407    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1651","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0229 19:11:28.781407    6464 pod_ready.go:92] pod "etcd-multinode-421600" in "kube-system" namespace has status "Ready":"True"
	I0229 19:11:28.781407    6464 pod_ready.go:81] duration metric: took 7.7899ms waiting for pod "etcd-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 19:11:28.781407    6464 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 19:11:28.781934    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-421600
	I0229 19:11:28.781934    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:28.781934    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:28.782019    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:28.784980    6464 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 19:11:28.784980    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:28.784980    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:28.784980    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:28.785358    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:28.785358    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:28.785358    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:28 GMT
	I0229 19:11:28.785358    6464 round_trippers.go:580]     Audit-Id: 8c2d6066-4b80-4492-a3fb-4a02d4eec3fc
	I0229 19:11:28.785435    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-421600","namespace":"kube-system","uid":"456b1ada-afd0-416c-a95f-71bea88e161d","resourceVersion":"1658","creationTimestamp":"2024-02-29T19:11:10Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.26.52.109:8443","kubernetes.io/config.hash":"aec335819ecb0b3c60068e2ed02eb80d","kubernetes.io/config.mirror":"aec335819ecb0b3c60068e2ed02eb80d","kubernetes.io/config.seen":"2024-02-29T19:11:04.922862090Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:11:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7400 chars]
	I0229 19:11:28.785435    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:28.785995    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:28.786038    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:28.786038    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:28.788053    6464 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 19:11:28.788836    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:28.788836    6464 round_trippers.go:580]     Audit-Id: 0081c7b0-dc23-4905-984b-23a4a5e4a630
	I0229 19:11:28.788836    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:28.788836    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:28.788836    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:28.788836    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:28.788836    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:28 GMT
	I0229 19:11:28.789102    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1651","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0229 19:11:28.789471    6464 pod_ready.go:92] pod "kube-apiserver-multinode-421600" in "kube-system" namespace has status "Ready":"True"
	I0229 19:11:28.789471    6464 pod_ready.go:81] duration metric: took 8.0639ms waiting for pod "kube-apiserver-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 19:11:28.789471    6464 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 19:11:28.789471    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-421600
	I0229 19:11:28.789471    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:28.789471    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:28.789471    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:28.792593    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:11:28.792804    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:28.792871    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:28.792871    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:28.792871    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:28 GMT
	I0229 19:11:28.792871    6464 round_trippers.go:580]     Audit-Id: f97f69a8-e456-48f2-b1c0-850c24382bba
	I0229 19:11:28.792871    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:28.792871    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:28.793124    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-421600","namespace":"kube-system","uid":"a41ee888-f6df-43d4-9799-67a9ef0b6c87","resourceVersion":"1646","creationTimestamp":"2024-02-29T18:50:38Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"dabef371df5cd2a8b883d06621dfc6bd","kubernetes.io/config.mirror":"dabef371df5cd2a8b883d06621dfc6bd","kubernetes.io/config.seen":"2024-02-29T18:50:38.626332146Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7175 chars]
	I0229 19:11:28.793608    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:28.793672    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:28.793672    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:28.793672    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:28.798796    6464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 19:11:28.798796    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:28.798796    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:28.798796    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:28.798796    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:28.798796    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:28 GMT
	I0229 19:11:28.798796    6464 round_trippers.go:580]     Audit-Id: 2161b2dc-de3e-487d-bb4c-14383b04276e
	I0229 19:11:28.798796    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:28.799382    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1651","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0229 19:11:28.799382    6464 pod_ready.go:92] pod "kube-controller-manager-multinode-421600" in "kube-system" namespace has status "Ready":"True"
	I0229 19:11:28.799911    6464 pod_ready.go:81] duration metric: took 10.4394ms waiting for pod "kube-controller-manager-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 19:11:28.799911    6464 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7c7xc" in "kube-system" namespace to be "Ready" ...
	I0229 19:11:28.799978    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7c7xc
	I0229 19:11:28.799978    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:28.799978    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:28.799978    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:28.802243    6464 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 19:11:28.802243    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:28.802243    6464 round_trippers.go:580]     Audit-Id: eb6c5232-6400-46e6-b7ab-53502a7248c6
	I0229 19:11:28.802243    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:28.802243    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:28.802243    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:28.802243    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:28.802243    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:28 GMT
	I0229 19:11:28.803164    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7c7xc","generateName":"kube-proxy-","namespace":"kube-system","uid":"6f8e7fe9-d8e5-47ca-80fd-7e5f7ae43140","resourceVersion":"579","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a888d9f6-ed77-4118-830b-881d923ceb9c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a888d9f6-ed77-4118-830b-881d923ceb9c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0229 19:11:28.803469    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m02
	I0229 19:11:28.803469    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:28.803469    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:28.803469    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:28.806071    6464 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 19:11:28.806071    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:28.806071    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:28.806071    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:28 GMT
	I0229 19:11:28.806071    6464 round_trippers.go:580]     Audit-Id: 00a50932-506b-4ca9-8236-39d27e8b142c
	I0229 19:11:28.806071    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:28.806071    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:28.806071    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:28.806481    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"fdb7ee2c-ccad-4d0d-bef6-6790b83f5cb6","resourceVersion":"1486","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_07_16_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager": [truncated 3818 chars]
	I0229 19:11:28.806789    6464 pod_ready.go:92] pod "kube-proxy-7c7xc" in "kube-system" namespace has status "Ready":"True"
	I0229 19:11:28.806853    6464 pod_ready.go:81] duration metric: took 6.9418ms waiting for pod "kube-proxy-7c7xc" in "kube-system" namespace to be "Ready" ...
	I0229 19:11:28.806853    6464 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fpk6m" in "kube-system" namespace to be "Ready" ...
	I0229 19:11:28.964463    6464 request.go:629] Waited for 157.5297ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fpk6m
	I0229 19:11:28.964463    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fpk6m
	I0229 19:11:28.964463    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:28.964463    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:28.964463    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:28.969121    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:11:28.969121    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:28.969121    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:28.969245    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:28.969245    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:28.969245    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:28.969245    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:29 GMT
	I0229 19:11:28.969245    6464 round_trippers.go:580]     Audit-Id: 22db715d-d58b-4440-9da0-0649447dc598
	I0229 19:11:28.969307    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fpk6m","generateName":"kube-proxy-","namespace":"kube-system","uid":"4c99c6ec-5ab0-434d-b5a9-cb24b10f8bbf","resourceVersion":"1574","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a888d9f6-ed77-4118-830b-881d923ceb9c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a888d9f6-ed77-4118-830b-881d923ceb9c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
	I0229 19:11:29.169945    6464 request.go:629] Waited for 200.0383ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:29.170399    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:29.170399    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:29.170399    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:29.170399    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:29.174050    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:11:29.174458    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:29.174458    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:29.174458    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:29.174458    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:29.174458    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:29 GMT
	I0229 19:11:29.174458    6464 round_trippers.go:580]     Audit-Id: 47bd2429-b625-49c9-841f-e34a252965fa
	I0229 19:11:29.174458    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:29.174649    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1651","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0229 19:11:29.175039    6464 pod_ready.go:92] pod "kube-proxy-fpk6m" in "kube-system" namespace has status "Ready":"True"
	I0229 19:11:29.175134    6464 pod_ready.go:81] duration metric: took 368.1656ms waiting for pod "kube-proxy-fpk6m" in "kube-system" namespace to be "Ready" ...
	I0229 19:11:29.175134    6464 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rhg8l" in "kube-system" namespace to be "Ready" ...
	I0229 19:11:29.374251    6464 request.go:629] Waited for 198.8178ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rhg8l
	I0229 19:11:29.374715    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rhg8l
	I0229 19:11:29.374715    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:29.374826    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:29.374826    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:29.379181    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:11:29.379181    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:29.379181    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:29.379181    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:29.379181    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:29.379181    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:29 GMT
	I0229 19:11:29.379181    6464 round_trippers.go:580]     Audit-Id: 83a06508-b033-41b9-8a88-6e29d9fc3d35
	I0229 19:11:29.379181    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:29.379535    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rhg8l","generateName":"kube-proxy-","namespace":"kube-system","uid":"58dfdc35-3e50-486d-b7a7-5bae65934cd5","resourceVersion":"1488","creationTimestamp":"2024-02-29T18:57:47Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a888d9f6-ed77-4118-830b-881d923ceb9c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:57:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a888d9f6-ed77-4118-830b-881d923ceb9c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5743 chars]
	I0229 19:11:29.562015    6464 request.go:629] Waited for 181.8685ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:11:29.562543    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:11:29.562543    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:29.562543    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:29.562543    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:29.568957    6464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 19:11:29.568957    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:29.568957    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:29.568957    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:29.568957    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:29 GMT
	I0229 19:11:29.568957    6464 round_trippers.go:580]     Audit-Id: 699849cb-5784-45e2-bb8e-9056b31c7df7
	I0229 19:11:29.568957    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:29.568957    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:29.570212    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"096122b8-0719-4361-9b63-57130df92d29","resourceVersion":"1501","creationTimestamp":"2024-02-29T19:07:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_07_16_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3635 chars]
	I0229 19:11:29.570461    6464 pod_ready.go:92] pod "kube-proxy-rhg8l" in "kube-system" namespace has status "Ready":"True"
	I0229 19:11:29.570461    6464 pod_ready.go:81] duration metric: took 395.3052ms waiting for pod "kube-proxy-rhg8l" in "kube-system" namespace to be "Ready" ...
	I0229 19:11:29.570461    6464 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 19:11:29.763163    6464 request.go:629] Waited for 192.6904ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-421600
	I0229 19:11:29.763163    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-421600
	I0229 19:11:29.763163    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:29.763807    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:29.763807    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:29.767301    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:11:29.768313    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:29.768313    6464 round_trippers.go:580]     Audit-Id: 21b61526-1b58-4fc8-bd37-2359352a9bac
	I0229 19:11:29.768313    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:29.768313    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:29.768313    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:29.768313    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:29.768313    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:29 GMT
	I0229 19:11:29.768663    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-421600","namespace":"kube-system","uid":"6742b97c-a3db-4fca-8da3-54fcde6d405a","resourceVersion":"1669","creationTimestamp":"2024-02-29T18:50:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a2c94c0a4c322f0bf7fcafad0430344f","kubernetes.io/config.mirror":"a2c94c0a4c322f0bf7fcafad0430344f","kubernetes.io/config.seen":"2024-02-29T18:50:38.626333146Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4905 chars]
	I0229 19:11:29.967196    6464 request.go:629] Waited for 197.7426ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:29.967480    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:11:29.967480    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:29.967480    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:29.967480    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:29.971356    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:11:29.971356    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:29.971356    6464 round_trippers.go:580]     Audit-Id: 3a45cbb2-c79d-4f30-b714-e2631414ddea
	I0229 19:11:29.971356    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:29.971356    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:29.971777    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:29.971777    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:29.971777    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:30 GMT
	I0229 19:11:29.972045    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1651","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0229 19:11:29.972045    6464 pod_ready.go:92] pod "kube-scheduler-multinode-421600" in "kube-system" namespace has status "Ready":"True"
	I0229 19:11:29.972045    6464 pod_ready.go:81] duration metric: took 401.5611ms waiting for pod "kube-scheduler-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 19:11:29.972045    6464 pod_ready.go:38] duration metric: took 9.2386934s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:11:29.972045    6464 api_server.go:52] waiting for apiserver process to appear ...
	I0229 19:11:29.981036    6464 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:11:30.004261    6464 command_runner.go:130] > 1892
	I0229 19:11:30.005220    6464 api_server.go:72] duration metric: took 14.3830381s to wait for apiserver process to appear ...
	I0229 19:11:30.005250    6464 api_server.go:88] waiting for apiserver healthz status ...
	I0229 19:11:30.005303    6464 api_server.go:253] Checking apiserver healthz at https://172.26.52.109:8443/healthz ...
	I0229 19:11:30.014850    6464 api_server.go:279] https://172.26.52.109:8443/healthz returned 200:
	ok
	I0229 19:11:30.015713    6464 round_trippers.go:463] GET https://172.26.52.109:8443/version
	I0229 19:11:30.015713    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:30.015713    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:30.015713    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:30.017419    6464 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0229 19:11:30.017419    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:30.017419    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:30.017419    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:30.017419    6464 round_trippers.go:580]     Content-Length: 264
	I0229 19:11:30.018386    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:30 GMT
	I0229 19:11:30.018386    6464 round_trippers.go:580]     Audit-Id: ab86c180-2c62-454e-851b-a58f336fa27b
	I0229 19:11:30.018386    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:30.018429    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:30.018429    6464 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0229 19:11:30.018632    6464 api_server.go:141] control plane version: v1.28.4
	I0229 19:11:30.018632    6464 api_server.go:131] duration metric: took 13.3821ms to wait for apiserver health ...
	I0229 19:11:30.018707    6464 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 19:11:30.169521    6464 request.go:629] Waited for 150.6574ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods
	I0229 19:11:30.169887    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods
	I0229 19:11:30.169887    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:30.169887    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:30.169887    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:30.175022    6464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 19:11:30.175022    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:30.175022    6464 round_trippers.go:580]     Audit-Id: 664c9928-7f2e-4be1-9ecc-50dfd9f70ca9
	I0229 19:11:30.175022    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:30.175022    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:30.175022    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:30.175633    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:30.175633    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:30 GMT
	I0229 19:11:30.176916    6464 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1689"},"items":[{"metadata":{"name":"coredns-5dd5756b68-5qhb2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"cb647b50-f478-4265-9ff1-b66190c46393","resourceVersion":"1685","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3eee4899-9868-46e9-9907-7fbe2995bab1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eee4899-9868-46e9-9907-7fbe2995bab1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82499 chars]
	I0229 19:11:30.180503    6464 system_pods.go:59] 12 kube-system pods found
	I0229 19:11:30.180503    6464 system_pods.go:61] "coredns-5dd5756b68-5qhb2" [cb647b50-f478-4265-9ff1-b66190c46393] Running
	I0229 19:11:30.180503    6464 system_pods.go:61] "etcd-multinode-421600" [a57a6b03-e79b-4fcd-8750-480d46e6feb7] Running
	I0229 19:11:30.180503    6464 system_pods.go:61] "kindnet-447dh" [c2052338-6892-465a-b1d4-c4247c9ac2a0] Running
	I0229 19:11:30.180503    6464 system_pods.go:61] "kindnet-7nzdd" [0ddba541-4eca-46f3-a45a-35433dcefe6c] Running
	I0229 19:11:30.180503    6464 system_pods.go:61] "kindnet-zblbg" [1ea7f301-b0fb-4708-85d2-d1256cdda09c] Running
	I0229 19:11:30.180503    6464 system_pods.go:61] "kube-apiserver-multinode-421600" [456b1ada-afd0-416c-a95f-71bea88e161d] Running
	I0229 19:11:30.180503    6464 system_pods.go:61] "kube-controller-manager-multinode-421600" [a41ee888-f6df-43d4-9799-67a9ef0b6c87] Running
	I0229 19:11:30.180503    6464 system_pods.go:61] "kube-proxy-7c7xc" [6f8e7fe9-d8e5-47ca-80fd-7e5f7ae43140] Running
	I0229 19:11:30.180503    6464 system_pods.go:61] "kube-proxy-fpk6m" [4c99c6ec-5ab0-434d-b5a9-cb24b10f8bbf] Running
	I0229 19:11:30.180503    6464 system_pods.go:61] "kube-proxy-rhg8l" [58dfdc35-3e50-486d-b7a7-5bae65934cd5] Running
	I0229 19:11:30.180503    6464 system_pods.go:61] "kube-scheduler-multinode-421600" [6742b97c-a3db-4fca-8da3-54fcde6d405a] Running
	I0229 19:11:30.180503    6464 system_pods.go:61] "storage-provisioner" [98ad07fa-8673-4933-9197-b7ceb8a3afbc] Running
	I0229 19:11:30.180503    6464 system_pods.go:74] duration metric: took 161.7059ms to wait for pod list to return data ...
	I0229 19:11:30.180503    6464 default_sa.go:34] waiting for default service account to be created ...
	I0229 19:11:30.373005    6464 request.go:629] Waited for 192.2835ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.52.109:8443/api/v1/namespaces/default/serviceaccounts
	I0229 19:11:30.373005    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/default/serviceaccounts
	I0229 19:11:30.373005    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:30.373005    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:30.373005    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:30.376645    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:11:30.377238    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:30.377238    6464 round_trippers.go:580]     Audit-Id: fe132d09-ef10-4760-8f45-7cd3fddb632f
	I0229 19:11:30.377238    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:30.377314    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:30.377314    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:30.377314    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:30.377314    6464 round_trippers.go:580]     Content-Length: 262
	I0229 19:11:30.377314    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:30 GMT
	I0229 19:11:30.377314    6464 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1689"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"3e667406-4272-4c92-bf6d-ce7b6f584082","resourceVersion":"302","creationTimestamp":"2024-02-29T18:50:50Z"}}]}
	I0229 19:11:30.377697    6464 default_sa.go:45] found service account: "default"
	I0229 19:11:30.377769    6464 default_sa.go:55] duration metric: took 197.2552ms for default service account to be created ...
	I0229 19:11:30.377769    6464 system_pods.go:116] waiting for k8s-apps to be running ...
	I0229 19:11:30.562481    6464 request.go:629] Waited for 184.2714ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods
	I0229 19:11:30.562481    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods
	I0229 19:11:30.562481    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:30.562481    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:30.562481    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:30.570832    6464 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0229 19:11:30.571614    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:30.571614    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:30.571614    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:30.571614    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:30 GMT
	I0229 19:11:30.571701    6464 round_trippers.go:580]     Audit-Id: 57c4c56a-a824-4d97-8eb1-30b50a92e7e4
	I0229 19:11:30.571701    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:30.571701    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:30.572845    6464 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1689"},"items":[{"metadata":{"name":"coredns-5dd5756b68-5qhb2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"cb647b50-f478-4265-9ff1-b66190c46393","resourceVersion":"1685","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3eee4899-9868-46e9-9907-7fbe2995bab1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eee4899-9868-46e9-9907-7fbe2995bab1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82499 chars]
	I0229 19:11:30.576450    6464 system_pods.go:86] 12 kube-system pods found
	I0229 19:11:30.576513    6464 system_pods.go:89] "coredns-5dd5756b68-5qhb2" [cb647b50-f478-4265-9ff1-b66190c46393] Running
	I0229 19:11:30.576513    6464 system_pods.go:89] "etcd-multinode-421600" [a57a6b03-e79b-4fcd-8750-480d46e6feb7] Running
	I0229 19:11:30.576513    6464 system_pods.go:89] "kindnet-447dh" [c2052338-6892-465a-b1d4-c4247c9ac2a0] Running
	I0229 19:11:30.576513    6464 system_pods.go:89] "kindnet-7nzdd" [0ddba541-4eca-46f3-a45a-35433dcefe6c] Running
	I0229 19:11:30.576513    6464 system_pods.go:89] "kindnet-zblbg" [1ea7f301-b0fb-4708-85d2-d1256cdda09c] Running
	I0229 19:11:30.576513    6464 system_pods.go:89] "kube-apiserver-multinode-421600" [456b1ada-afd0-416c-a95f-71bea88e161d] Running
	I0229 19:11:30.576513    6464 system_pods.go:89] "kube-controller-manager-multinode-421600" [a41ee888-f6df-43d4-9799-67a9ef0b6c87] Running
	I0229 19:11:30.576513    6464 system_pods.go:89] "kube-proxy-7c7xc" [6f8e7fe9-d8e5-47ca-80fd-7e5f7ae43140] Running
	I0229 19:11:30.576513    6464 system_pods.go:89] "kube-proxy-fpk6m" [4c99c6ec-5ab0-434d-b5a9-cb24b10f8bbf] Running
	I0229 19:11:30.576513    6464 system_pods.go:89] "kube-proxy-rhg8l" [58dfdc35-3e50-486d-b7a7-5bae65934cd5] Running
	I0229 19:11:30.576513    6464 system_pods.go:89] "kube-scheduler-multinode-421600" [6742b97c-a3db-4fca-8da3-54fcde6d405a] Running
	I0229 19:11:30.576513    6464 system_pods.go:89] "storage-provisioner" [98ad07fa-8673-4933-9197-b7ceb8a3afbc] Running
	I0229 19:11:30.576513    6464 system_pods.go:126] duration metric: took 198.7328ms to wait for k8s-apps to be running ...
	I0229 19:11:30.576513    6464 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 19:11:30.584797    6464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:11:30.609150    6464 system_svc.go:56] duration metric: took 32.6353ms WaitForService to wait for kubelet.
	I0229 19:11:30.609150    6464 kubeadm.go:581] duration metric: took 14.9878929s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 19:11:30.609150    6464 node_conditions.go:102] verifying NodePressure condition ...
	I0229 19:11:30.766062    6464 request.go:629] Waited for 156.7096ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.52.109:8443/api/v1/nodes
	I0229 19:11:30.766305    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes
	I0229 19:11:30.766305    6464 round_trippers.go:469] Request Headers:
	I0229 19:11:30.766305    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:11:30.766305    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:11:30.770878    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:11:30.770878    6464 round_trippers.go:577] Response Headers:
	I0229 19:11:30.770878    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:11:30.770878    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:11:30.770878    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:11:30.770878    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:11:30.770878    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:11:30 GMT
	I0229 19:11:30.770878    6464 round_trippers.go:580]     Audit-Id: df5f5d3c-fbdb-4fd9-8722-0c19c4f04bce
	I0229 19:11:30.771426    6464 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1689"},"items":[{"metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1651","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 14728 chars]
	I0229 19:11:30.772238    6464 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 19:11:30.772299    6464 node_conditions.go:123] node cpu capacity is 2
	I0229 19:11:30.772299    6464 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 19:11:30.772299    6464 node_conditions.go:123] node cpu capacity is 2
	I0229 19:11:30.772299    6464 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 19:11:30.772299    6464 node_conditions.go:123] node cpu capacity is 2
	I0229 19:11:30.772299    6464 node_conditions.go:105] duration metric: took 163.1406ms to run NodePressure ...
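The capacity figures above ("17734596Ki", cpu "2") use Kubernetes binary-suffix quantities. A small stdlib-only sketch of converting the Ki/Mi/Gi forms to bytes (minikube's actual parsing uses `resource.Quantity` from k8s.io/apimachinery; this helper is ours):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseBinaryQuantity converts a binary-suffix quantity such as
// "17734596Ki" into bytes. Only the Ki/Mi/Gi suffixes that appear in
// node capacity output are handled; a bare number is taken as bytes.
func parseBinaryQuantity(s string) (int64, error) {
	mult := int64(1)
	switch {
	case strings.HasSuffix(s, "Ki"):
		mult, s = 1<<10, strings.TrimSuffix(s, "Ki")
	case strings.HasSuffix(s, "Mi"):
		mult, s = 1<<20, strings.TrimSuffix(s, "Mi")
	case strings.HasSuffix(s, "Gi"):
		mult, s = 1<<30, strings.TrimSuffix(s, "Gi")
	}
	n, err := strconv.ParseInt(s, 10, 64)
	if err != nil {
		return 0, err
	}
	return n * mult, nil
}

func main() {
	b, err := parseBinaryQuantity("17734596Ki")
	fmt.Println(b, err) // 17734596 * 1024 bytes
}
```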
	I0229 19:11:30.772299    6464 start.go:228] waiting for startup goroutines ...
	I0229 19:11:30.772382    6464 start.go:233] waiting for cluster config update ...
	I0229 19:11:30.772382    6464 start.go:242] writing updated cluster config ...
	I0229 19:11:30.788488    6464 config.go:182] Loaded profile config "multinode-421600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 19:11:30.789237    6464 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\config.json ...
	I0229 19:11:30.792944    6464 out.go:177] * Starting worker node multinode-421600-m02 in cluster multinode-421600
	I0229 19:11:30.793579    6464 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 19:11:30.793657    6464 cache.go:56] Caching tarball of preloaded images
	I0229 19:11:30.793931    6464 preload.go:174] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 19:11:30.793931    6464 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0229 19:11:30.793931    6464 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\config.json ...
	I0229 19:11:30.795819    6464 start.go:365] acquiring machines lock for multinode-421600-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 19:11:30.796409    6464 start.go:369] acquired machines lock for "multinode-421600-m02" in 589.3µs
	I0229 19:11:30.796537    6464 start.go:96] Skipping create...Using existing machine configuration
	I0229 19:11:30.796537    6464 fix.go:54] fixHost starting: m02
	I0229 19:11:30.796537    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 19:11:32.784155    6464 main.go:141] libmachine: [stdout =====>] : Off
	
	I0229 19:11:32.784305    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:11:32.784305    6464 fix.go:102] recreateIfNeeded on multinode-421600-m02: state=Stopped err=<nil>
	W0229 19:11:32.784305    6464 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 19:11:32.784968    6464 out.go:177] * Restarting existing hyperv VM for "multinode-421600-m02" ...
	I0229 19:11:32.785642    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-421600-m02
	I0229 19:11:35.532937    6464 main.go:141] libmachine: [stdout =====>] : 
	I0229 19:11:35.532980    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:11:35.532980    6464 main.go:141] libmachine: Waiting for host to start...
	I0229 19:11:35.533036    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 19:11:37.608234    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:11:37.608234    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:11:37.609173    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 19:11:39.923121    6464 main.go:141] libmachine: [stdout =====>] : 
	I0229 19:11:39.923121    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:11:40.934885    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 19:11:42.923155    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:11:42.923433    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:11:42.923536    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 19:11:45.204859    6464 main.go:141] libmachine: [stdout =====>] : 
	I0229 19:11:45.204859    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:11:46.205677    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 19:11:48.232215    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:11:48.232215    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:11:48.232215    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 19:11:50.569400    6464 main.go:141] libmachine: [stdout =====>] : 
	I0229 19:11:50.569400    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:11:51.578933    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 19:11:53.607036    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:11:53.607372    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:11:53.607549    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 19:11:55.931938    6464 main.go:141] libmachine: [stdout =====>] : 
	I0229 19:11:55.931938    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:11:56.947562    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 19:11:58.959484    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:11:58.959484    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:11:58.960034    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 19:12:01.367429    6464 main.go:141] libmachine: [stdout =====>] : 172.26.62.204
	
	I0229 19:12:01.367429    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:12:01.369554    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 19:12:03.364903    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:12:03.364903    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:12:03.364903    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 19:12:05.761994    6464 main.go:141] libmachine: [stdout =====>] : 172.26.62.204
	
	I0229 19:12:05.761994    6464 main.go:141] libmachine: [stderr =====>] : 
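The loop above repeatedly invokes `Hyper-V\Get-VM … .networkadapters[0].ipaddresses[0]` through PowerShell, sleeping between empty results until the adapter reports an address. A sketch of that wait-for-IP pattern with the PowerShell call abstracted behind an injected function (`waitForIP` and the stub are ours, not minikube's code):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP polls getIP until it returns a non-empty address or the
// attempt budget runs out, mirroring the Get-VM polling in the log.
// getIP is injected so the sketch runs without Hyper-V.
func waitForIP(getIP func() string, attempts int, delay time.Duration) (string, error) {
	for i := 0; i < attempts; i++ {
		if ip := getIP(); ip != "" {
			return ip, nil
		}
		time.Sleep(delay)
	}
	return "", errors.New("timed out waiting for VM IP")
}

func main() {
	calls := 0
	stub := func() string {
		calls++
		if calls < 3 {
			return "" // adapter not yet reporting an address
		}
		return "172.26.62.204"
	}
	ip, err := waitForIP(stub, 10, time.Millisecond)
	fmt.Println(ip, err)
}
```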
	I0229 19:12:05.761994    6464 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\config.json ...
	I0229 19:12:05.764730    6464 machine.go:88] provisioning docker machine ...
	I0229 19:12:05.764730    6464 buildroot.go:166] provisioning hostname "multinode-421600-m02"
	I0229 19:12:05.764730    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 19:12:07.765146    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:12:07.765146    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:12:07.765146    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 19:12:10.112097    6464 main.go:141] libmachine: [stdout =====>] : 172.26.62.204
	
	I0229 19:12:10.112097    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:12:10.117901    6464 main.go:141] libmachine: Using SSH client type: native
	I0229 19:12:10.118004    6464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.62.204 22 <nil> <nil>}
	I0229 19:12:10.118004    6464 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-421600-m02 && echo "multinode-421600-m02" | sudo tee /etc/hostname
	I0229 19:12:10.290078    6464 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-421600-m02
	
	I0229 19:12:10.290078    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 19:12:12.284946    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:12:12.284946    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:12:12.285741    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 19:12:14.691491    6464 main.go:141] libmachine: [stdout =====>] : 172.26.62.204
	
	I0229 19:12:14.691529    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:12:14.695647    6464 main.go:141] libmachine: Using SSH client type: native
	I0229 19:12:14.696095    6464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.62.204 22 <nil> <nil>}
	I0229 19:12:14.696183    6464 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-421600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-421600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-421600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
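The SSH payload above keeps the 127.0.1.1 entry in /etc/hosts in sync with the newly set hostname. A sketch of templating that shell fragment for an arbitrary hostname (the `hostsFixupScript` helper name is ours; the script body matches what the log shows being run):

```go
package main

import "fmt"

// hostsFixupScript renders the /etc/hosts fix-up shell fragment for a
// given hostname: add a 127.0.1.1 line if no entry for the hostname
// exists, rewriting any existing 127.0.1.1 line in place.
func hostsFixupScript(hostname string) string {
	return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, hostname)
}

func main() {
	fmt.Println(hostsFixupScript("multinode-421600-m02"))
}
```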
	I0229 19:12:14.850111    6464 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 19:12:14.850111    6464 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0229 19:12:14.850201    6464 buildroot.go:174] setting up certificates
	I0229 19:12:14.850201    6464 provision.go:83] configureAuth start
	I0229 19:12:14.850318    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 19:12:16.852734    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:12:16.852734    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:12:16.853688    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 19:12:19.252176    6464 main.go:141] libmachine: [stdout =====>] : 172.26.62.204
	
	I0229 19:12:19.252176    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:12:19.252176    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 19:12:21.258312    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:12:21.258312    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:12:21.258312    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 19:12:23.670100    6464 main.go:141] libmachine: [stdout =====>] : 172.26.62.204
	
	I0229 19:12:23.670445    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:12:23.670445    6464 provision.go:138] copyHostCerts
	I0229 19:12:23.670638    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0229 19:12:23.670763    6464 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0229 19:12:23.670763    6464 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0229 19:12:23.670763    6464 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0229 19:12:23.671816    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0229 19:12:23.671993    6464 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0229 19:12:23.671993    6464 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0229 19:12:23.672191    6464 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0229 19:12:23.673036    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0229 19:12:23.673146    6464 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0229 19:12:23.673226    6464 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0229 19:12:23.673480    6464 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0229 19:12:23.674243    6464 provision.go:112] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-421600-m02 san=[172.26.62.204 172.26.62.204 localhost 127.0.0.1 minikube multinode-421600-m02]
	I0229 19:12:23.870633    6464 provision.go:172] copyRemoteCerts
	I0229 19:12:23.878640    6464 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 19:12:23.878640    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 19:12:25.860690    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:12:25.860690    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:12:25.860690    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 19:12:28.227539    6464 main.go:141] libmachine: [stdout =====>] : 172.26.62.204
	
	I0229 19:12:28.227539    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:12:28.227761    6464 sshutil.go:53] new ssh client: &{IP:172.26.62.204 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600-m02\id_rsa Username:docker}
	I0229 19:12:28.339719    6464 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4607654s)
	I0229 19:12:28.339719    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0229 19:12:28.340191    6464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I0229 19:12:28.386836    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0229 19:12:28.387371    6464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 19:12:28.439037    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0229 19:12:28.439516    6464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 19:12:28.487497    6464 provision.go:86] duration metric: configureAuth took 13.6365389s
	I0229 19:12:28.487497    6464 buildroot.go:189] setting minikube options for container-runtime
	I0229 19:12:28.488190    6464 config.go:182] Loaded profile config "multinode-421600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 19:12:28.488270    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 19:12:30.459585    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:12:30.460445    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:12:30.460445    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 19:12:32.819605    6464 main.go:141] libmachine: [stdout =====>] : 172.26.62.204
	
	I0229 19:12:32.819762    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:12:32.824615    6464 main.go:141] libmachine: Using SSH client type: native
	I0229 19:12:32.825240    6464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.62.204 22 <nil> <nil>}
	I0229 19:12:32.825240    6464 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 19:12:32.964057    6464 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 19:12:32.964201    6464 buildroot.go:70] root file system type: tmpfs
	I0229 19:12:32.964571    6464 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 19:12:32.964669    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 19:12:34.943992    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:12:34.944820    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:12:34.944902    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 19:12:37.323483    6464 main.go:141] libmachine: [stdout =====>] : 172.26.62.204
	
	I0229 19:12:37.323483    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:12:37.328335    6464 main.go:141] libmachine: Using SSH client type: native
	I0229 19:12:37.328860    6464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.62.204 22 <nil> <nil>}
	I0229 19:12:37.328958    6464 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.26.52.109"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 19:12:37.507513    6464 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.26.52.109
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 19:12:37.507513    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 19:12:39.457615    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:12:39.457684    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:12:39.457684    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 19:12:41.838892    6464 main.go:141] libmachine: [stdout =====>] : 172.26.62.204
	
	I0229 19:12:41.838892    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:12:41.845955    6464 main.go:141] libmachine: Using SSH client type: native
	I0229 19:12:41.846505    6464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.62.204 22 <nil> <nil>}
	I0229 19:12:41.846634    6464 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 19:12:43.026414    6464 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0229 19:12:43.026414    6464 machine.go:91] provisioned docker machine in 37.2596164s
	I0229 19:12:43.026414    6464 start.go:300] post-start starting for "multinode-421600-m02" (driver="hyperv")
	I0229 19:12:43.026414    6464 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 19:12:43.039221    6464 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 19:12:43.039221    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 19:12:44.979810    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:12:44.980173    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:12:44.980173    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 19:12:47.331865    6464 main.go:141] libmachine: [stdout =====>] : 172.26.62.204
	
	I0229 19:12:47.331865    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:12:47.331865    6464 sshutil.go:53] new ssh client: &{IP:172.26.62.204 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600-m02\id_rsa Username:docker}
	I0229 19:12:47.444271    6464 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4048056s)
	I0229 19:12:47.456940    6464 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 19:12:47.464628    6464 command_runner.go:130] > NAME=Buildroot
	I0229 19:12:47.464628    6464 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0229 19:12:47.464628    6464 command_runner.go:130] > ID=buildroot
	I0229 19:12:47.464628    6464 command_runner.go:130] > VERSION_ID=2023.02.9
	I0229 19:12:47.464628    6464 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0229 19:12:47.464628    6464 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 19:12:47.464628    6464 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0229 19:12:47.465274    6464 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0229 19:12:47.465869    6464 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem -> 43562.pem in /etc/ssl/certs
	I0229 19:12:47.465938    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem -> /etc/ssl/certs/43562.pem
	I0229 19:12:47.479658    6464 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 19:12:47.503202    6464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem --> /etc/ssl/certs/43562.pem (1708 bytes)
	I0229 19:12:47.558856    6464 start.go:303] post-start completed in 4.5321901s
	I0229 19:12:47.558972    6464 fix.go:56] fixHost completed within 1m16.7581748s
	I0229 19:12:47.558972    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 19:12:49.539055    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:12:49.539055    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:12:49.539834    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 19:12:51.939497    6464 main.go:141] libmachine: [stdout =====>] : 172.26.62.204
	
	I0229 19:12:51.939497    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:12:51.944316    6464 main.go:141] libmachine: Using SSH client type: native
	I0229 19:12:51.944940    6464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.62.204 22 <nil> <nil>}
	I0229 19:12:51.944940    6464 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 19:12:52.083874    6464 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709233972.252497775
	
	I0229 19:12:52.084112    6464 fix.go:206] guest clock: 1709233972.252497775
	I0229 19:12:52.084112    6464 fix.go:219] Guest: 2024-02-29 19:12:52.252497775 +0000 UTC Remote: 2024-02-29 19:12:47.5589723 +0000 UTC m=+211.912371701 (delta=4.693525475s)
	I0229 19:12:52.084112    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 19:12:54.104805    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:12:54.104805    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:12:54.104805    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 19:12:56.478163    6464 main.go:141] libmachine: [stdout =====>] : 172.26.62.204
	
	I0229 19:12:56.478411    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:12:56.482388    6464 main.go:141] libmachine: Using SSH client type: native
	I0229 19:12:56.483065    6464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.62.204 22 <nil> <nil>}
	I0229 19:12:56.483065    6464 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709233972
	I0229 19:12:56.630710    6464 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Feb 29 19:12:52 UTC 2024
	
	I0229 19:12:56.630766    6464 fix.go:226] clock set: Thu Feb 29 19:12:52 UTC 2024
	 (err=<nil>)
	I0229 19:12:56.630766    6464 start.go:83] releasing machines lock for "multinode-421600-m02", held for 1m25.8295155s
	I0229 19:12:56.630990    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 19:12:58.631605    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:12:58.632036    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:12:58.632119    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 19:13:00.993634    6464 main.go:141] libmachine: [stdout =====>] : 172.26.62.204
	
	I0229 19:13:00.993634    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:13:00.995307    6464 out.go:177] * Found network options:
	I0229 19:13:00.996249    6464 out.go:177]   - NO_PROXY=172.26.52.109
	W0229 19:13:00.996853    6464 proxy.go:119] fail to check proxy env: Error ip not in block
	I0229 19:13:00.997356    6464 out.go:177]   - NO_PROXY=172.26.52.109
	W0229 19:13:00.997866    6464 proxy.go:119] fail to check proxy env: Error ip not in block
	W0229 19:13:00.999312    6464 proxy.go:119] fail to check proxy env: Error ip not in block
	I0229 19:13:01.002178    6464 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 19:13:01.002282    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 19:13:01.012417    6464 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0229 19:13:01.012417    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 19:13:02.992061    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:13:02.992061    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:13:02.992156    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 19:13:03.014347    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:13:03.014347    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:13:03.014347    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 19:13:05.416222    6464 main.go:141] libmachine: [stdout =====>] : 172.26.62.204
	
	I0229 19:13:05.416425    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:13:05.416751    6464 sshutil.go:53] new ssh client: &{IP:172.26.62.204 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600-m02\id_rsa Username:docker}
	I0229 19:13:05.440153    6464 main.go:141] libmachine: [stdout =====>] : 172.26.62.204
	
	I0229 19:13:05.440153    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:13:05.440603    6464 sshutil.go:53] new ssh client: &{IP:172.26.62.204 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600-m02\id_rsa Username:docker}
	I0229 19:13:05.583949    6464 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0229 19:13:05.584880    6464 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0229 19:13:05.584959    6464 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.5722089s)
	I0229 19:13:05.584992    6464 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5825262s)
	W0229 19:13:05.584992    6464 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 19:13:05.597546    6464 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 19:13:05.627022    6464 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0229 19:13:05.627225    6464 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 19:13:05.627300    6464 start.go:475] detecting cgroup driver to use...
	I0229 19:13:05.627300    6464 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 19:13:05.661368    6464 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0229 19:13:05.672678    6464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 19:13:05.699585    6464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 19:13:05.718488    6464 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 19:13:05.730489    6464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 19:13:05.760583    6464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 19:13:05.790014    6464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 19:13:05.821474    6464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 19:13:05.853572    6464 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 19:13:05.881921    6464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 19:13:05.910673    6464 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 19:13:05.928878    6464 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0229 19:13:05.937565    6464 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 19:13:05.965550    6464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 19:13:06.146304    6464 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 19:13:06.175774    6464 start.go:475] detecting cgroup driver to use...
	I0229 19:13:06.184613    6464 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 19:13:06.204494    6464 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0229 19:13:06.204494    6464 command_runner.go:130] > [Unit]
	I0229 19:13:06.204494    6464 command_runner.go:130] > Description=Docker Application Container Engine
	I0229 19:13:06.204494    6464 command_runner.go:130] > Documentation=https://docs.docker.com
	I0229 19:13:06.204494    6464 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0229 19:13:06.204494    6464 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0229 19:13:06.204494    6464 command_runner.go:130] > StartLimitBurst=3
	I0229 19:13:06.204494    6464 command_runner.go:130] > StartLimitIntervalSec=60
	I0229 19:13:06.204494    6464 command_runner.go:130] > [Service]
	I0229 19:13:06.204494    6464 command_runner.go:130] > Type=notify
	I0229 19:13:06.204494    6464 command_runner.go:130] > Restart=on-failure
	I0229 19:13:06.204494    6464 command_runner.go:130] > Environment=NO_PROXY=172.26.52.109
	I0229 19:13:06.204494    6464 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0229 19:13:06.204494    6464 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0229 19:13:06.204494    6464 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0229 19:13:06.204494    6464 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0229 19:13:06.204494    6464 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0229 19:13:06.204494    6464 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0229 19:13:06.204494    6464 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0229 19:13:06.204494    6464 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0229 19:13:06.204494    6464 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0229 19:13:06.204494    6464 command_runner.go:130] > ExecStart=
	I0229 19:13:06.204494    6464 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0229 19:13:06.204494    6464 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0229 19:13:06.204494    6464 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0229 19:13:06.204494    6464 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0229 19:13:06.204494    6464 command_runner.go:130] > LimitNOFILE=infinity
	I0229 19:13:06.204494    6464 command_runner.go:130] > LimitNPROC=infinity
	I0229 19:13:06.204494    6464 command_runner.go:130] > LimitCORE=infinity
	I0229 19:13:06.204494    6464 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0229 19:13:06.204494    6464 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0229 19:13:06.204494    6464 command_runner.go:130] > TasksMax=infinity
	I0229 19:13:06.204494    6464 command_runner.go:130] > TimeoutStartSec=0
	I0229 19:13:06.204494    6464 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0229 19:13:06.204494    6464 command_runner.go:130] > Delegate=yes
	I0229 19:13:06.204494    6464 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0229 19:13:06.204494    6464 command_runner.go:130] > KillMode=process
	I0229 19:13:06.204494    6464 command_runner.go:130] > [Install]
	I0229 19:13:06.204494    6464 command_runner.go:130] > WantedBy=multi-user.target
	I0229 19:13:06.213522    6464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 19:13:06.245471    6464 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 19:13:06.284220    6464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 19:13:06.316421    6464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 19:13:06.348334    6464 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 19:13:06.399867    6464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 19:13:06.423842    6464 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 19:13:06.460025    6464 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0229 19:13:06.470911    6464 ssh_runner.go:195] Run: which cri-dockerd
	I0229 19:13:06.477496    6464 command_runner.go:130] > /usr/bin/cri-dockerd
	I0229 19:13:06.485592    6464 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 19:13:06.503053    6464 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 19:13:06.542660    6464 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 19:13:06.729662    6464 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 19:13:06.907498    6464 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 19:13:06.907598    6464 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 19:13:06.951939    6464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 19:13:07.145458    6464 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 19:13:08.718931    6464 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5732989s)
	I0229 19:13:08.730562    6464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0229 19:13:08.764815    6464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 19:13:08.798940    6464 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0229 19:13:08.988274    6464 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0229 19:13:09.179069    6464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 19:13:09.366874    6464 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0229 19:13:09.408220    6464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 19:13:09.442355    6464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 19:13:09.629779    6464 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0229 19:13:09.739482    6464 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0229 19:13:09.748088    6464 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0229 19:13:09.757397    6464 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0229 19:13:09.757522    6464 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0229 19:13:09.757522    6464 command_runner.go:130] > Device: 0,22	Inode: 850         Links: 1
	I0229 19:13:09.757522    6464 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0229 19:13:09.757522    6464 command_runner.go:130] > Access: 2024-02-29 19:13:09.824476429 +0000
	I0229 19:13:09.757522    6464 command_runner.go:130] > Modify: 2024-02-29 19:13:09.824476429 +0000
	I0229 19:13:09.757605    6464 command_runner.go:130] > Change: 2024-02-29 19:13:09.828476526 +0000
	I0229 19:13:09.757605    6464 command_runner.go:130] >  Birth: -
	I0229 19:13:09.757605    6464 start.go:543] Will wait 60s for crictl version
	I0229 19:13:09.767487    6464 ssh_runner.go:195] Run: which crictl
	I0229 19:13:09.774546    6464 command_runner.go:130] > /usr/bin/crictl
	I0229 19:13:09.785101    6464 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 19:13:09.864976    6464 command_runner.go:130] > Version:  0.1.0
	I0229 19:13:09.864976    6464 command_runner.go:130] > RuntimeName:  docker
	I0229 19:13:09.864976    6464 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0229 19:13:09.865061    6464 command_runner.go:130] > RuntimeApiVersion:  v1
	I0229 19:13:09.865061    6464 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0229 19:13:09.871279    6464 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 19:13:09.904888    6464 command_runner.go:130] > 24.0.7
	I0229 19:13:09.913440    6464 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 19:13:09.945818    6464 command_runner.go:130] > 24.0.7
	I0229 19:13:09.947639    6464 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0229 19:13:09.948267    6464 out.go:177]   - env NO_PROXY=172.26.52.109
	I0229 19:13:09.948774    6464 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0229 19:13:09.952410    6464 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0229 19:13:09.952410    6464 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0229 19:13:09.952410    6464 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0229 19:13:09.952410    6464 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:76:3f:19 Flags:up|broadcast|multicast|running}
	I0229 19:13:09.954658    6464 ip.go:210] interface addr: fe80::841a:4367:8c9:abc/64
	I0229 19:13:09.954658    6464 ip.go:210] interface addr: 172.26.48.1/20
	I0229 19:13:09.962624    6464 ssh_runner.go:195] Run: grep 172.26.48.1	host.minikube.internal$ /etc/hosts
	I0229 19:13:09.969569    6464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.26.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
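The `/etc/hosts` rewrite above is an idempotent remove-then-append: strip any stale `host.minikube.internal` line, append a fresh one, and overwrite the file. A minimal sketch of that pattern, run against a temporary copy rather than the real `/etc/hosts` (the `172.26.48.1` address is the one from the log):

```shell
# Sketch of minikube's idempotent hosts-entry update (the grep -v / echo /
# overwrite pattern from the log), against a temp file, not /etc/hosts.
set -eu
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n' > "$hosts"
tab=$(printf '\t')
# Remove any stale host.minikube.internal line, then append a fresh one.
{ grep -v "${tab}host.minikube.internal\$" "$hosts"
  printf '172.26.48.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
grep -c 'host.minikube.internal' "$hosts"
```

Because the old entry is filtered out before the new one is appended, re-running the step never accumulates duplicate lines.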
	I0229 19:13:09.991018    6464 certs.go:56] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600 for IP: 172.26.62.204
	I0229 19:13:09.991018    6464 certs.go:190] acquiring lock for shared ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:13:09.991659    6464 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0229 19:13:09.991977    6464 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0229 19:13:09.992169    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0229 19:13:09.992373    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0229 19:13:09.992578    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0229 19:13:09.992684    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0229 19:13:09.993071    6464 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4356.pem (1338 bytes)
	W0229 19:13:09.993245    6464 certs.go:433] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4356_empty.pem, impossibly tiny 0 bytes
	I0229 19:13:09.993330    6464 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0229 19:13:09.993596    6464 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0229 19:13:09.993846    6464 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0229 19:13:09.994113    6464 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0229 19:13:09.994457    6464 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem (1708 bytes)
	I0229 19:13:09.994807    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem -> /usr/share/ca-certificates/43562.pem
	I0229 19:13:09.994977    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0229 19:13:09.995065    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4356.pem -> /usr/share/ca-certificates/4356.pem
	I0229 19:13:09.995734    6464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 19:13:10.043238    6464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 19:13:10.090333    6464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 19:13:10.139574    6464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 19:13:10.186760    6464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem --> /usr/share/ca-certificates/43562.pem (1708 bytes)
	I0229 19:13:10.232315    6464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 19:13:10.281382    6464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4356.pem --> /usr/share/ca-certificates/4356.pem (1338 bytes)
	I0229 19:13:10.336363    6464 ssh_runner.go:195] Run: openssl version
	I0229 19:13:10.345980    6464 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0229 19:13:10.353775    6464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43562.pem && ln -fs /usr/share/ca-certificates/43562.pem /etc/ssl/certs/43562.pem"
	I0229 19:13:10.383450    6464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43562.pem
	I0229 19:13:10.391030    6464 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 29 17:55 /usr/share/ca-certificates/43562.pem
	I0229 19:13:10.391800    6464 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:55 /usr/share/ca-certificates/43562.pem
	I0229 19:13:10.400518    6464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43562.pem
	I0229 19:13:10.409881    6464 command_runner.go:130] > 3ec20f2e
	I0229 19:13:10.418553    6464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43562.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 19:13:10.449597    6464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 19:13:10.480896    6464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 19:13:10.490257    6464 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 29 17:41 /usr/share/ca-certificates/minikubeCA.pem
	I0229 19:13:10.490257    6464 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:41 /usr/share/ca-certificates/minikubeCA.pem
	I0229 19:13:10.499213    6464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 19:13:10.507769    6464 command_runner.go:130] > b5213941
	I0229 19:13:10.516263    6464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 19:13:10.545053    6464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4356.pem && ln -fs /usr/share/ca-certificates/4356.pem /etc/ssl/certs/4356.pem"
	I0229 19:13:10.574303    6464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4356.pem
	I0229 19:13:10.582460    6464 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 29 17:55 /usr/share/ca-certificates/4356.pem
	I0229 19:13:10.582630    6464 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:55 /usr/share/ca-certificates/4356.pem
	I0229 19:13:10.593683    6464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4356.pem
	I0229 19:13:10.602454    6464 command_runner.go:130] > 51391683
	I0229 19:13:10.611490    6464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4356.pem /etc/ssl/certs/51391683.0"
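Each `openssl x509 -hash` / `ln -fs` pair above installs a PEM under its OpenSSL subject-hash name (`<hash>.0`) in `/etc/ssl/certs`, which is how OpenSSL locates CA certificates at verification time. A sketch of one cycle, using a throwaway self-signed certificate in a temp directory instead of the log's real paths:

```shell
# Sketch of the hash-and-symlink step the log repeats for each PEM:
# compute the OpenSSL subject hash, then link <hash>.0 to the cert so
# OpenSSL can find it by hashed-directory lookup. Throwaway cert and temp
# dir stand in for /usr/share/ca-certificates and /etc/ssl/certs.
set -eu
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=minikubeCA" \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/$hash.0"
readlink "$dir/$hash.0"
```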
	I0229 19:13:10.643761    6464 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 19:13:10.650083    6464 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 19:13:10.650209    6464 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 19:13:10.657144    6464 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 19:13:10.690242    6464 command_runner.go:130] > cgroupfs
	I0229 19:13:10.691229    6464 cni.go:84] Creating CNI manager for ""
	I0229 19:13:10.691229    6464 cni.go:136] 3 nodes found, recommending kindnet
	I0229 19:13:10.691229    6464 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 19:13:10.691229    6464 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.26.62.204 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-421600 NodeName:multinode-421600-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.26.52.109"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.26.62.204 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 19:13:10.691229    6464 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.26.62.204
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-421600-m02"
	  kubeletExtraArgs:
	    node-ip: 172.26.62.204
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.26.52.109"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 19:13:10.691229    6464 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-421600-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.26.62.204
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-421600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 19:13:10.702553    6464 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 19:13:10.726039    6464 command_runner.go:130] > kubeadm
	I0229 19:13:10.726039    6464 command_runner.go:130] > kubectl
	I0229 19:13:10.726039    6464 command_runner.go:130] > kubelet
	I0229 19:13:10.726114    6464 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 19:13:10.734324    6464 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0229 19:13:10.752035    6464 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0229 19:13:10.780596    6464 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 19:13:10.824008    6464 ssh_runner.go:195] Run: grep 172.26.52.109	control-plane.minikube.internal$ /etc/hosts
	I0229 19:13:10.831297    6464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.26.52.109	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 19:13:10.855214    6464 host.go:66] Checking if "multinode-421600" exists ...
	I0229 19:13:10.855760    6464 config.go:182] Loaded profile config "multinode-421600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 19:13:10.855863    6464 start.go:304] JoinCluster: &{Name:multinode-421600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-421600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.26.52.109 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.26.62.204 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.26.50.77 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 19:13:10.855973    6464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0229 19:13:10.856083    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 19:13:12.831100    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:13:12.831377    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:13:12.831430    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 19:13:15.229344    6464 main.go:141] libmachine: [stdout =====>] : 172.26.52.109
	
	I0229 19:13:15.229417    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:13:15.229718    6464 sshutil.go:53] new ssh client: &{IP:172.26.52.109 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600\id_rsa Username:docker}
	I0229 19:13:15.429323    6464 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 17ge30.k5c1ofp73y6h9sv3 --discovery-token-ca-cert-hash sha256:cee10ebbc824bfc36c0d81f93293570211b0e6bda8098cea612d080b286ee20e 
	I0229 19:13:15.429396    6464 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.5731699s)
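The `--discovery-token-ca-cert-hash sha256:…` value in the printed join command is `sha256:` followed by the SHA-256 digest of the cluster CA's DER-encoded public key, which joining nodes use to pin the control plane's identity. A sketch of computing that digest for a throwaway CA (on a real cluster the input would be the control plane's CA certificate, e.g. `/var/lib/minikube/certs/ca.crt`):

```shell
# Sketch: derive a kubeadm-style discovery-token-ca-cert-hash from a CA
# cert. A throwaway self-signed CA in a temp dir stands in for the real
# cluster CA used in the log.
set -eu
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=minikubeCA" \
  -keyout "$dir/ca.key" -out "$dir/ca.crt" 2>/dev/null
# Extract the public key, re-encode it as DER, and hash it.
digest=$(openssl x509 -pubkey -noout -in "$dir/ca.crt" \
  | openssl pkey -pubin -outform der \
  | openssl dgst -sha256 | awk '{print $NF}')
echo "sha256:$digest"
```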
	I0229 19:13:15.429510    6464 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.26.62.204 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0229 19:13:15.429579    6464 host.go:66] Checking if "multinode-421600" exists ...
	I0229 19:13:15.438876    6464 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-421600-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0229 19:13:15.438876    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 19:13:17.470586    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:13:17.470659    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:13:17.470732    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 19:13:19.872640    6464 main.go:141] libmachine: [stdout =====>] : 172.26.52.109
	
	I0229 19:13:19.872672    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:13:19.872729    6464 sshutil.go:53] new ssh client: &{IP:172.26.52.109 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600\id_rsa Username:docker}
	I0229 19:13:20.046770    6464 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0229 19:13:20.125085    6464 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-zblbg, kube-system/kube-proxy-7c7xc
	I0229 19:13:23.146785    6464 command_runner.go:130] > node/multinode-421600-m02 cordoned
	I0229 19:13:23.147182    6464 command_runner.go:130] > pod "busybox-5b5d89c9d6-dk9k8" has DeletionTimestamp older than 1 seconds, skipping
	I0229 19:13:23.147182    6464 command_runner.go:130] > node/multinode-421600-m02 drained
	I0229 19:13:23.147261    6464 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-421600-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (7.7079586s)
	I0229 19:13:23.147339    6464 node.go:108] successfully drained node "m02"
	I0229 19:13:23.148790    6464 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 19:13:23.149694    6464 kapi.go:59] client config for multinode-421600: &rest.Config{Host:"https://172.26.52.109:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-421600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-421600\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ff0600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 19:13:23.151174    6464 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0229 19:13:23.151288    6464 round_trippers.go:463] DELETE https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m02
	I0229 19:13:23.151288    6464 round_trippers.go:469] Request Headers:
	I0229 19:13:23.151288    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:13:23.151288    6464 round_trippers.go:473]     Content-Type: application/json
	I0229 19:13:23.151288    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:13:23.171999    6464 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0229 19:13:23.171999    6464 round_trippers.go:577] Response Headers:
	I0229 19:13:23.171999    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:13:23.171999    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:13:23.171999    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:13:23.171999    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:13:23.171999    6464 round_trippers.go:580]     Content-Length: 171
	I0229 19:13:23.171999    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:13:23 GMT
	I0229 19:13:23.171999    6464 round_trippers.go:580]     Audit-Id: a69a1684-b3d0-4a0a-93e1-71f5cbeb3d40
	I0229 19:13:23.172211    6464 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-421600-m02","kind":"nodes","uid":"fdb7ee2c-ccad-4d0d-bef6-6790b83f5cb6"}}
	I0229 19:13:23.172321    6464 node.go:124] successfully deleted node "m02"
	I0229 19:13:23.172375    6464 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:172.26.62.204 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0229 19:13:23.172375    6464 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.26.62.204 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0229 19:13:23.172375    6464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 17ge30.k5c1ofp73y6h9sv3 --discovery-token-ca-cert-hash sha256:cee10ebbc824bfc36c0d81f93293570211b0e6bda8098cea612d080b286ee20e --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-421600-m02"
	I0229 19:13:23.402752    6464 command_runner.go:130] ! W0229 19:13:23.573454    1334 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0229 19:13:23.883833    6464 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 19:13:25.697954    6464 command_runner.go:130] > [preflight] Running pre-flight checks
	I0229 19:13:25.697999    6464 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0229 19:13:25.698037    6464 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0229 19:13:25.698072    6464 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 19:13:25.698072    6464 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 19:13:25.698096    6464 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0229 19:13:25.698096    6464 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0229 19:13:25.698096    6464 command_runner.go:130] > This node has joined the cluster:
	I0229 19:13:25.698096    6464 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0229 19:13:25.698096    6464 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0229 19:13:25.698096    6464 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0229 19:13:25.698096    6464 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 17ge30.k5c1ofp73y6h9sv3 --discovery-token-ca-cert-hash sha256:cee10ebbc824bfc36c0d81f93293570211b0e6bda8098cea612d080b286ee20e --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-421600-m02": (2.5255809s)
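[Editor's note] The `kubeadm join` invocation above pins the control plane's CA with `--discovery-token-ca-cert-hash sha256:…`. That value is the SHA-256 digest of the cluster CA's DER-encoded public key, and can be reproduced with the openssl pipeline from the kubeadm documentation. The sketch below generates a throwaway self-signed CA so the pipeline has something to hash; `ca.crt`/`ca.key` are hypothetical stand-ins for the real `/etc/kubernetes/pki/ca.crt` on a control-plane node.

```shell
# Create a throwaway CA certificate (stand-in for the cluster's real CA).
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -days 1 -subj "/CN=kubernetes" 2>/dev/null

# SHA-256 over the DER-encoded public key -- the same digest kubeadm expects
# after the "sha256:" prefix in --discovery-token-ca-cert-hash.
hash=$(openssl x509 -pubkey -in ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 \
  | awk '{print $NF}')
echo "sha256:${hash}"
```

Run against the real cluster CA, this reproduces the hash a joining node uses to verify it is talking to the intended control plane.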
	I0229 19:13:25.698096    6464 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0229 19:13:25.966056    6464 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0229 19:13:26.223822    6464 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19 minikube.k8s.io/name=multinode-421600 minikube.k8s.io/updated_at=2024_02_29T19_13_26_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:13:26.402712    6464 command_runner.go:130] > node/multinode-421600-m02 labeled
	I0229 19:13:26.402712    6464 command_runner.go:130] > node/multinode-421600-m03 labeled
	I0229 19:13:26.402712    6464 start.go:306] JoinCluster complete in 15.5460907s
	I0229 19:13:26.402712    6464 cni.go:84] Creating CNI manager for ""
	I0229 19:13:26.402712    6464 cni.go:136] 3 nodes found, recommending kindnet
	I0229 19:13:26.411710    6464 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0229 19:13:26.420382    6464 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0229 19:13:26.420673    6464 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0229 19:13:26.420673    6464 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0229 19:13:26.420733    6464 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0229 19:13:26.420733    6464 command_runner.go:130] > Access: 2024-02-29 19:09:50.700291300 +0000
	I0229 19:13:26.420733    6464 command_runner.go:130] > Modify: 2024-02-23 03:39:37.000000000 +0000
	I0229 19:13:26.420733    6464 command_runner.go:130] > Change: 2024-02-29 19:09:39.251000000 +0000
	I0229 19:13:26.420733    6464 command_runner.go:130] >  Birth: -
	I0229 19:13:26.421009    6464 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0229 19:13:26.421072    6464 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0229 19:13:26.467255    6464 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0229 19:13:26.865768    6464 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0229 19:13:26.866260    6464 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0229 19:13:26.866260    6464 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0229 19:13:26.866260    6464 command_runner.go:130] > daemonset.apps/kindnet configured
	I0229 19:13:26.866907    6464 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 19:13:26.866907    6464 kapi.go:59] client config for multinode-421600: &rest.Config{Host:"https://172.26.52.109:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-421600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-421600\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ff0600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 19:13:26.867762    6464 round_trippers.go:463] GET https://172.26.52.109:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0229 19:13:26.867762    6464 round_trippers.go:469] Request Headers:
	I0229 19:13:26.867762    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:13:26.867762    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:13:26.872324    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:13:26.872430    6464 round_trippers.go:577] Response Headers:
	I0229 19:13:26.872454    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:13:26.872454    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:13:26.872454    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:13:26.872454    6464 round_trippers.go:580]     Content-Length: 292
	I0229 19:13:26.872515    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:13:27 GMT
	I0229 19:13:26.872515    6464 round_trippers.go:580]     Audit-Id: d475fcac-d67c-4db8-ad7e-bc501450809b
	I0229 19:13:26.872515    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:13:26.872579    6464 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b9192a23-827d-4114-8861-df907bfdc0ef","resourceVersion":"1689","creationTimestamp":"2024-02-29T18:50:38Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0229 19:13:26.872579    6464 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-421600" context rescaled to 1 replicas
	I0229 19:13:26.872579    6464 start.go:223] Will wait 6m0s for node &{Name:m02 IP:172.26.62.204 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0229 19:13:26.873451    6464 out.go:177] * Verifying Kubernetes components...
	I0229 19:13:26.883449    6464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:13:26.912984    6464 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 19:13:26.913968    6464 kapi.go:59] client config for multinode-421600: &rest.Config{Host:"https://172.26.52.109:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-421600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-421600\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ff0600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 19:13:26.915203    6464 node_ready.go:35] waiting up to 6m0s for node "multinode-421600-m02" to be "Ready" ...
	I0229 19:13:26.915382    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m02
	I0229 19:13:26.915382    6464 round_trippers.go:469] Request Headers:
	I0229 19:13:26.915382    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:13:26.915490    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:13:26.918516    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:13:26.919517    6464 round_trippers.go:577] Response Headers:
	I0229 19:13:26.919517    6464 round_trippers.go:580]     Audit-Id: 33ad315b-a7a1-4cda-bb9b-81c4819dcae8
	I0229 19:13:26.919517    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:13:26.919517    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:13:26.919517    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:13:26.919517    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:13:26.919517    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:13:27 GMT
	I0229 19:13:26.919517    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"338c4461-534d-49a5-942c-1346a36627e6","resourceVersion":"1839","creationTimestamp":"2024-02-29T19:13:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_13_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:13:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3558 chars]
	I0229 19:13:27.426048    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m02
	I0229 19:13:27.426183    6464 round_trippers.go:469] Request Headers:
	I0229 19:13:27.426183    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:13:27.426254    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:13:27.430672    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:13:27.430756    6464 round_trippers.go:577] Response Headers:
	I0229 19:13:27.430828    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:13:27.430828    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:13:27.430828    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:13:27 GMT
	I0229 19:13:27.430828    6464 round_trippers.go:580]     Audit-Id: c4880682-de8b-49ff-a02a-37cbf1db2387
	I0229 19:13:27.430828    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:13:27.430919    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:13:27.431049    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"338c4461-534d-49a5-942c-1346a36627e6","resourceVersion":"1839","creationTimestamp":"2024-02-29T19:13:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_13_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:13:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3558 chars]
	I0229 19:13:27.916723    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m02
	I0229 19:13:27.916758    6464 round_trippers.go:469] Request Headers:
	I0229 19:13:27.916758    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:13:27.916758    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:13:27.920464    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:13:27.921013    6464 round_trippers.go:577] Response Headers:
	I0229 19:13:27.921065    6464 round_trippers.go:580]     Audit-Id: 3ac90f97-4a2c-4391-8e5e-038b6886fd99
	I0229 19:13:27.921065    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:13:27.921065    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:13:27.921065    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:13:27.921065    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:13:27.921151    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:13:28 GMT
	I0229 19:13:27.921379    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"338c4461-534d-49a5-942c-1346a36627e6","resourceVersion":"1839","creationTimestamp":"2024-02-29T19:13:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_13_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:13:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3558 chars]
	I0229 19:13:28.420529    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m02
	I0229 19:13:28.420616    6464 round_trippers.go:469] Request Headers:
	I0229 19:13:28.420616    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:13:28.420616    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:13:28.427894    6464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 19:13:28.427894    6464 round_trippers.go:577] Response Headers:
	I0229 19:13:28.427894    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:13:28.427894    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:13:28 GMT
	I0229 19:13:28.427894    6464 round_trippers.go:580]     Audit-Id: c4697f6a-844f-4e34-8a29-6d0b847c8b6c
	I0229 19:13:28.427894    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:13:28.427894    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:13:28.427894    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:13:28.428803    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"338c4461-534d-49a5-942c-1346a36627e6","resourceVersion":"1851","creationTimestamp":"2024-02-29T19:13:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_13_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:13:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3667 chars]
	I0229 19:13:28.924140    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m02
	I0229 19:13:28.924350    6464 round_trippers.go:469] Request Headers:
	I0229 19:13:28.924350    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:13:28.924350    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:13:28.930848    6464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 19:13:28.930848    6464 round_trippers.go:577] Response Headers:
	I0229 19:13:28.930848    6464 round_trippers.go:580]     Audit-Id: 3b10fecf-f4ff-484b-884a-62aeff0bcb62
	I0229 19:13:28.930848    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:13:28.930848    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:13:28.930848    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:13:28.930848    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:13:28.930848    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:13:29 GMT
	I0229 19:13:28.931911    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"338c4461-534d-49a5-942c-1346a36627e6","resourceVersion":"1851","creationTimestamp":"2024-02-29T19:13:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_13_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:13:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3667 chars]
	I0229 19:13:28.931911    6464 node_ready.go:58] node "multinode-421600-m02" has status "Ready":"False"
	I0229 19:13:29.425773    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m02
	I0229 19:13:29.426000    6464 round_trippers.go:469] Request Headers:
	I0229 19:13:29.426000    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:13:29.426000    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:13:29.429380    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:13:29.429536    6464 round_trippers.go:577] Response Headers:
	I0229 19:13:29.429536    6464 round_trippers.go:580]     Audit-Id: 43691923-39a2-4901-a87f-e10af82b674e
	I0229 19:13:29.429536    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:13:29.429536    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:13:29.429536    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:13:29.429536    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:13:29.429536    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:13:29 GMT
	I0229 19:13:29.429833    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"338c4461-534d-49a5-942c-1346a36627e6","resourceVersion":"1851","creationTimestamp":"2024-02-29T19:13:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_13_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:13:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3667 chars]
	I0229 19:13:29.930704    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m02
	I0229 19:13:29.930704    6464 round_trippers.go:469] Request Headers:
	I0229 19:13:29.930704    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:13:29.930799    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:13:29.934125    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:13:29.934125    6464 round_trippers.go:577] Response Headers:
	I0229 19:13:29.934125    6464 round_trippers.go:580]     Audit-Id: ac8a6a89-1f4b-46d8-9c0a-60adea9d3f11
	I0229 19:13:29.934125    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:13:29.934125    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:13:29.934125    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:13:29.934125    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:13:29.934616    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:13:30 GMT
	I0229 19:13:29.934762    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"338c4461-534d-49a5-942c-1346a36627e6","resourceVersion":"1856","creationTimestamp":"2024-02-29T19:13:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_13_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:13:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3925 chars]
	I0229 19:13:29.935051    6464 node_ready.go:49] node "multinode-421600-m02" has status "Ready":"True"
	I0229 19:13:29.935051    6464 node_ready.go:38] duration metric: took 3.0195911s waiting for node "multinode-421600-m02" to be "Ready" ...
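[Editor's note] The `node_ready` wait that just completed is a plain poll: GET the node object roughly every 500 ms until its Ready condition reports True, within a 6m0s budget. A minimal shell sketch of the same pattern follows; `check_ready` and `node_status.txt` are illustrative stand-ins (not minikube code) for something like `kubectl get node multinode-421600-m02 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'`.

```shell
# Poll-until-ready loop mirroring the log's behavior.
check_ready() { cat node_status.txt; }          # stand-in for the API GET

echo "False" > node_status.txt
( sleep 1; echo "True" > node_status.txt ) &    # status flips after ~1s

tries=0
until [ "$(check_ready)" = "True" ]; do
  tries=$((tries + 1))
  [ "$tries" -gt 720 ] && { echo "timeout"; exit 1; }  # ~6m at 0.5s per poll
  sleep 0.5
done
echo "node Ready"
```

The bounded retry count plays the role of the 6m0s deadline in the log; a real client would also back off on API errors rather than poll at a fixed interval.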
	I0229 19:13:29.935051    6464 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:13:29.935051    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods
	I0229 19:13:29.935051    6464 round_trippers.go:469] Request Headers:
	I0229 19:13:29.935051    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:13:29.935051    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:13:29.940666    6464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 19:13:29.940666    6464 round_trippers.go:577] Response Headers:
	I0229 19:13:29.940666    6464 round_trippers.go:580]     Audit-Id: e1612164-3c3e-407d-afac-3793a17dd319
	I0229 19:13:29.940666    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:13:29.940666    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:13:29.940666    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:13:29.940666    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:13:29.940666    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:13:30 GMT
	I0229 19:13:29.943302    6464 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1858"},"items":[{"metadata":{"name":"coredns-5dd5756b68-5qhb2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"cb647b50-f478-4265-9ff1-b66190c46393","resourceVersion":"1685","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3eee4899-9868-46e9-9907-7fbe2995bab1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eee4899-9868-46e9-9907-7fbe2995bab1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83369 chars]
	I0229 19:13:29.946201    6464 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5qhb2" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:29.946201    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5qhb2
	I0229 19:13:29.946201    6464 round_trippers.go:469] Request Headers:
	I0229 19:13:29.946201    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:13:29.946201    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:13:29.950235    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:13:29.950235    6464 round_trippers.go:577] Response Headers:
	I0229 19:13:29.950235    6464 round_trippers.go:580]     Audit-Id: 76e72bb6-98f3-44be-8efe-ed5eafb51d6a
	I0229 19:13:29.950235    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:13:29.950235    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:13:29.950235    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:13:29.950235    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:13:29.950235    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:13:30 GMT
	I0229 19:13:29.950235    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5qhb2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"cb647b50-f478-4265-9ff1-b66190c46393","resourceVersion":"1685","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3eee4899-9868-46e9-9907-7fbe2995bab1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eee4899-9868-46e9-9907-7fbe2995bab1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6493 chars]
	I0229 19:13:29.951215    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:13:29.951215    6464 round_trippers.go:469] Request Headers:
	I0229 19:13:29.951215    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:13:29.951215    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:13:29.954211    6464 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 19:13:29.954211    6464 round_trippers.go:577] Response Headers:
	I0229 19:13:29.954211    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:13:29.954211    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:13:30 GMT
	I0229 19:13:29.954211    6464 round_trippers.go:580]     Audit-Id: 8354422f-2b00-4faa-8e9a-aa67bfffeac2
	I0229 19:13:29.954211    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:13:29.954211    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:13:29.954211    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:13:29.955306    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1651","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0229 19:13:29.955769    6464 pod_ready.go:92] pod "coredns-5dd5756b68-5qhb2" in "kube-system" namespace has status "Ready":"True"
	I0229 19:13:29.955831    6464 pod_ready.go:81] duration metric: took 9.629ms waiting for pod "coredns-5dd5756b68-5qhb2" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:29.955831    6464 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:29.955958    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-421600
	I0229 19:13:29.955958    6464 round_trippers.go:469] Request Headers:
	I0229 19:13:29.956032    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:13:29.956032    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:13:29.959012    6464 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 19:13:29.959609    6464 round_trippers.go:577] Response Headers:
	I0229 19:13:29.959609    6464 round_trippers.go:580]     Audit-Id: 83cd848a-6bda-4540-a8e5-ee128cb6f150
	I0229 19:13:29.959609    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:13:29.959609    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:13:29.959609    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:13:29.959609    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:13:29.959609    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:13:30 GMT
	I0229 19:13:29.960057    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-421600","namespace":"kube-system","uid":"a57a6b03-e79b-4fcd-8750-480d46e6feb7","resourceVersion":"1655","creationTimestamp":"2024-02-29T19:11:10Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.26.52.109:2379","kubernetes.io/config.hash":"ffd246c3f34c2bcd65e63e05d5465206","kubernetes.io/config.mirror":"ffd246c3f34c2bcd65e63e05d5465206","kubernetes.io/config.seen":"2024-02-29T19:11:04.922860790Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:11:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5863 chars]
	I0229 19:13:29.960562    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:13:29.960626    6464 round_trippers.go:469] Request Headers:
	I0229 19:13:29.960626    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:13:29.960626    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:13:29.967062    6464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 19:13:29.967062    6464 round_trippers.go:577] Response Headers:
	I0229 19:13:29.967062    6464 round_trippers.go:580]     Audit-Id: 46550eee-1e82-4375-8ab4-c867ada5df28
	I0229 19:13:29.967062    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:13:29.967062    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:13:29.967062    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:13:29.967062    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:13:29.967062    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:13:30 GMT
	I0229 19:13:29.967062    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1651","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0229 19:13:29.967062    6464 pod_ready.go:92] pod "etcd-multinode-421600" in "kube-system" namespace has status "Ready":"True"
	I0229 19:13:29.967062    6464 pod_ready.go:81] duration metric: took 11.2309ms waiting for pod "etcd-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:29.967062    6464 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:29.967062    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-421600
	I0229 19:13:29.967062    6464 round_trippers.go:469] Request Headers:
	I0229 19:13:29.967062    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:13:29.967062    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:13:29.971057    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:13:29.971057    6464 round_trippers.go:577] Response Headers:
	I0229 19:13:29.971057    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:13:29.971057    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:13:30 GMT
	I0229 19:13:29.971057    6464 round_trippers.go:580]     Audit-Id: 65d7426d-285b-4020-bdf5-eb13aa886e6d
	I0229 19:13:29.971057    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:13:29.971057    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:13:29.971057    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:13:29.971737    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-421600","namespace":"kube-system","uid":"456b1ada-afd0-416c-a95f-71bea88e161d","resourceVersion":"1658","creationTimestamp":"2024-02-29T19:11:10Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.26.52.109:8443","kubernetes.io/config.hash":"aec335819ecb0b3c60068e2ed02eb80d","kubernetes.io/config.mirror":"aec335819ecb0b3c60068e2ed02eb80d","kubernetes.io/config.seen":"2024-02-29T19:11:04.922862090Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:11:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7400 chars]
	I0229 19:13:29.972239    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:13:29.972297    6464 round_trippers.go:469] Request Headers:
	I0229 19:13:29.972297    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:13:29.972297    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:13:29.974653    6464 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 19:13:29.974653    6464 round_trippers.go:577] Response Headers:
	I0229 19:13:29.974653    6464 round_trippers.go:580]     Audit-Id: 68bbce86-7dd9-405c-8519-8d78dee40507
	I0229 19:13:29.974653    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:13:29.974653    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:13:29.974653    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:13:29.974653    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:13:29.974653    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:13:30 GMT
	I0229 19:13:29.974653    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1651","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0229 19:13:29.975647    6464 pod_ready.go:92] pod "kube-apiserver-multinode-421600" in "kube-system" namespace has status "Ready":"True"
	I0229 19:13:29.975647    6464 pod_ready.go:81] duration metric: took 8.5845ms waiting for pod "kube-apiserver-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:29.975647    6464 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:29.975647    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-421600
	I0229 19:13:29.975647    6464 round_trippers.go:469] Request Headers:
	I0229 19:13:29.975647    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:13:29.975647    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:13:29.978667    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:13:29.978667    6464 round_trippers.go:577] Response Headers:
	I0229 19:13:29.978667    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:13:29.978667    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:13:29.978667    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:13:29.978667    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:13:30 GMT
	I0229 19:13:29.978667    6464 round_trippers.go:580]     Audit-Id: e30cd4cb-27cf-4532-8088-8fe52f938aa6
	I0229 19:13:29.978667    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:13:29.979105    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-421600","namespace":"kube-system","uid":"a41ee888-f6df-43d4-9799-67a9ef0b6c87","resourceVersion":"1646","creationTimestamp":"2024-02-29T18:50:38Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"dabef371df5cd2a8b883d06621dfc6bd","kubernetes.io/config.mirror":"dabef371df5cd2a8b883d06621dfc6bd","kubernetes.io/config.seen":"2024-02-29T18:50:38.626332146Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7175 chars]
	I0229 19:13:29.979300    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:13:29.979300    6464 round_trippers.go:469] Request Headers:
	I0229 19:13:29.979300    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:13:29.979651    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:13:29.981663    6464 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 19:13:29.981663    6464 round_trippers.go:577] Response Headers:
	I0229 19:13:29.981663    6464 round_trippers.go:580]     Audit-Id: daf19e4d-3ebe-4377-a9e4-2a4fbd09fcb8
	I0229 19:13:29.981663    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:13:29.981663    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:13:29.981663    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:13:29.981663    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:13:29.981663    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:13:30 GMT
	I0229 19:13:29.982665    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1651","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0229 19:13:29.982665    6464 pod_ready.go:92] pod "kube-controller-manager-multinode-421600" in "kube-system" namespace has status "Ready":"True"
	I0229 19:13:29.982665    6464 pod_ready.go:81] duration metric: took 7.0176ms waiting for pod "kube-controller-manager-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:29.982665    6464 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7c7xc" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:30.146052    6464 request.go:629] Waited for 163.1564ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7c7xc
	I0229 19:13:30.146052    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7c7xc
	I0229 19:13:30.146052    6464 round_trippers.go:469] Request Headers:
	I0229 19:13:30.146052    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:13:30.146052    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:13:30.150732    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:13:30.150795    6464 round_trippers.go:577] Response Headers:
	I0229 19:13:30.150795    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:13:30.150795    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:13:30.150795    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:13:30.150795    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:13:30.150795    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:13:30 GMT
	I0229 19:13:30.150795    6464 round_trippers.go:580]     Audit-Id: d656e1a0-dfb6-49d9-9cd3-1e8c4bcf3666
	I0229 19:13:30.150795    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7c7xc","generateName":"kube-proxy-","namespace":"kube-system","uid":"6f8e7fe9-d8e5-47ca-80fd-7e5f7ae43140","resourceVersion":"1844","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a888d9f6-ed77-4118-830b-881d923ceb9c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a888d9f6-ed77-4118-830b-881d923ceb9c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
	I0229 19:13:30.334170    6464 request.go:629] Waited for 182.1724ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m02
	I0229 19:13:30.334268    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m02
	I0229 19:13:30.334268    6464 round_trippers.go:469] Request Headers:
	I0229 19:13:30.334268    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:13:30.334268    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:13:30.338602    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:13:30.338602    6464 round_trippers.go:577] Response Headers:
	I0229 19:13:30.338602    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:13:30.338602    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:13:30.338602    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:13:30 GMT
	I0229 19:13:30.338602    6464 round_trippers.go:580]     Audit-Id: 330e0b42-279a-4dcb-97d2-668da0e2c741
	I0229 19:13:30.338602    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:13:30.338602    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:13:30.339599    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"338c4461-534d-49a5-942c-1346a36627e6","resourceVersion":"1856","creationTimestamp":"2024-02-29T19:13:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_13_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:13:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3925 chars]
	I0229 19:13:30.339971    6464 pod_ready.go:92] pod "kube-proxy-7c7xc" in "kube-system" namespace has status "Ready":"True"
	I0229 19:13:30.340057    6464 pod_ready.go:81] duration metric: took 357.2863ms waiting for pod "kube-proxy-7c7xc" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:30.340057    6464 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fpk6m" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:30.537477    6464 request.go:629] Waited for 196.8621ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fpk6m
	I0229 19:13:30.537662    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fpk6m
	I0229 19:13:30.537662    6464 round_trippers.go:469] Request Headers:
	I0229 19:13:30.537720    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:13:30.537748    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:13:30.542597    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:13:30.542673    6464 round_trippers.go:577] Response Headers:
	I0229 19:13:30.542673    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:13:30.542673    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:13:30 GMT
	I0229 19:13:30.542776    6464 round_trippers.go:580]     Audit-Id: 4cc69736-7e1b-42ad-9a96-8e61d0fe4d7e
	I0229 19:13:30.542776    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:13:30.542776    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:13:30.542776    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:13:30.543001    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fpk6m","generateName":"kube-proxy-","namespace":"kube-system","uid":"4c99c6ec-5ab0-434d-b5a9-cb24b10f8bbf","resourceVersion":"1574","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a888d9f6-ed77-4118-830b-881d923ceb9c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a888d9f6-ed77-4118-830b-881d923ceb9c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
	I0229 19:13:30.742425    6464 request.go:629] Waited for 197.899ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:13:30.742548    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:13:30.742548    6464 round_trippers.go:469] Request Headers:
	I0229 19:13:30.742548    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:13:30.742644    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:13:30.746595    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:13:30.746595    6464 round_trippers.go:577] Response Headers:
	I0229 19:13:30.746717    6464 round_trippers.go:580]     Audit-Id: 6a896730-b790-4eba-9436-bcbc276842ef
	I0229 19:13:30.746717    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:13:30.746717    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:13:30.746717    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:13:30.746717    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:13:30.746785    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:13:30 GMT
	I0229 19:13:30.747172    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1651","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0229 19:13:30.748001    6464 pod_ready.go:92] pod "kube-proxy-fpk6m" in "kube-system" namespace has status "Ready":"True"
	I0229 19:13:30.748058    6464 pod_ready.go:81] duration metric: took 407.9209ms waiting for pod "kube-proxy-fpk6m" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:30.748058    6464 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rhg8l" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:30.945333    6464 request.go:629] Waited for 196.9333ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rhg8l
	I0229 19:13:30.945458    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rhg8l
	I0229 19:13:30.945458    6464 round_trippers.go:469] Request Headers:
	I0229 19:13:30.945559    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:13:30.945559    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:13:30.953889    6464 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0229 19:13:30.953889    6464 round_trippers.go:577] Response Headers:
	I0229 19:13:30.953889    6464 round_trippers.go:580]     Audit-Id: 2461d593-924d-4d25-81a1-176ba9eafed7
	I0229 19:13:30.953889    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:13:30.953889    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:13:30.953889    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:13:30.953889    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:13:30.953889    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:13:31 GMT
	I0229 19:13:30.954679    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rhg8l","generateName":"kube-proxy-","namespace":"kube-system","uid":"58dfdc35-3e50-486d-b7a7-5bae65934cd5","resourceVersion":"1718","creationTimestamp":"2024-02-29T18:57:47Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a888d9f6-ed77-4118-830b-881d923ceb9c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:57:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a888d9f6-ed77-4118-830b-881d923ceb9c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5968 chars]
	I0229 19:13:31.134745    6464 request.go:629] Waited for 180.0112ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:13:31.134745    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:13:31.134745    6464 round_trippers.go:469] Request Headers:
	I0229 19:13:31.134745    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:13:31.134745    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:13:31.141754    6464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 19:13:31.141754    6464 round_trippers.go:577] Response Headers:
	I0229 19:13:31.142697    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:13:31.142722    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:13:31.142722    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:13:31 GMT
	I0229 19:13:31.142722    6464 round_trippers.go:580]     Audit-Id: a74d7216-946b-4bce-abc2-63e98aef0db0
	I0229 19:13:31.142722    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:13:31.142722    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:13:31.142929    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"096122b8-0719-4361-9b63-57130df92d29","resourceVersion":"1840","creationTimestamp":"2024-02-29T19:07:14Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_13_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:07:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 4391 chars]
	I0229 19:13:31.143320    6464 pod_ready.go:97] node "multinode-421600-m03" hosting pod "kube-proxy-rhg8l" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-421600-m03" has status "Ready":"Unknown"
	I0229 19:13:31.143391    6464 pod_ready.go:81] duration metric: took 395.3106ms waiting for pod "kube-proxy-rhg8l" in "kube-system" namespace to be "Ready" ...
	E0229 19:13:31.143391    6464 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-421600-m03" hosting pod "kube-proxy-rhg8l" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-421600-m03" has status "Ready":"Unknown"
	I0229 19:13:31.143391    6464 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:31.337996    6464 request.go:629] Waited for 194.3103ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-421600
	I0229 19:13:31.337996    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-421600
	I0229 19:13:31.337996    6464 round_trippers.go:469] Request Headers:
	I0229 19:13:31.337996    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:13:31.337996    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:13:31.341741    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:13:31.341741    6464 round_trippers.go:577] Response Headers:
	I0229 19:13:31.341741    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:13:31 GMT
	I0229 19:13:31.341741    6464 round_trippers.go:580]     Audit-Id: 52f5ac15-b806-43e9-b7d0-237f929b4ec6
	I0229 19:13:31.341741    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:13:31.341741    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:13:31.341741    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:13:31.341741    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:13:31.342686    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-421600","namespace":"kube-system","uid":"6742b97c-a3db-4fca-8da3-54fcde6d405a","resourceVersion":"1669","creationTimestamp":"2024-02-29T18:50:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a2c94c0a4c322f0bf7fcafad0430344f","kubernetes.io/config.mirror":"a2c94c0a4c322f0bf7fcafad0430344f","kubernetes.io/config.seen":"2024-02-29T18:50:38.626333146Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4905 chars]
	I0229 19:13:31.540137    6464 request.go:629] Waited for 196.738ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:13:31.540370    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:13:31.540488    6464 round_trippers.go:469] Request Headers:
	I0229 19:13:31.540508    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:13:31.540508    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:13:31.544939    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:13:31.544939    6464 round_trippers.go:577] Response Headers:
	I0229 19:13:31.544939    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:13:31.544939    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:13:31.544939    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:13:31.544939    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:13:31.545237    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:13:31 GMT
	I0229 19:13:31.545237    6464 round_trippers.go:580]     Audit-Id: 444eaa89-3e46-48d4-9928-43e6a997c129
	I0229 19:13:31.545420    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1651","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0229 19:13:31.546018    6464 pod_ready.go:92] pod "kube-scheduler-multinode-421600" in "kube-system" namespace has status "Ready":"True"
	I0229 19:13:31.546082    6464 pod_ready.go:81] duration metric: took 402.6044ms waiting for pod "kube-scheduler-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 19:13:31.546082    6464 pod_ready.go:38] duration metric: took 1.6109414s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:13:31.546147    6464 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 19:13:31.555810    6464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:13:31.586173    6464 system_svc.go:56] duration metric: took 39.9658ms WaitForService to wait for kubelet.
	I0229 19:13:31.586173    6464 kubeadm.go:581] duration metric: took 4.7133322s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 19:13:31.586286    6464 node_conditions.go:102] verifying NodePressure condition ...
	I0229 19:13:31.731168    6464 request.go:629] Waited for 144.6538ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.52.109:8443/api/v1/nodes
	I0229 19:13:31.731256    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes
	I0229 19:13:31.731256    6464 round_trippers.go:469] Request Headers:
	I0229 19:13:31.731256    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:13:31.731256    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:13:31.734554    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:13:31.735260    6464 round_trippers.go:577] Response Headers:
	I0229 19:13:31.735260    6464 round_trippers.go:580]     Audit-Id: 6878a424-4200-46e2-8d3c-2b0e3ba26a09
	I0229 19:13:31.735260    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:13:31.735260    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:13:31.735260    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:13:31.735334    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:13:31.735449    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:13:31 GMT
	I0229 19:13:31.736010    6464 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1860"},"items":[{"metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1651","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15591 chars]
	I0229 19:13:31.736997    6464 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 19:13:31.736997    6464 node_conditions.go:123] node cpu capacity is 2
	I0229 19:13:31.737065    6464 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 19:13:31.737065    6464 node_conditions.go:123] node cpu capacity is 2
	I0229 19:13:31.737065    6464 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 19:13:31.737065    6464 node_conditions.go:123] node cpu capacity is 2
	I0229 19:13:31.737065    6464 node_conditions.go:105] duration metric: took 150.7707ms to run NodePressure ...
	I0229 19:13:31.737065    6464 start.go:228] waiting for startup goroutines ...
	I0229 19:13:31.737137    6464 start.go:242] writing updated cluster config ...
	I0229 19:13:31.755624    6464 config.go:182] Loaded profile config "multinode-421600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 19:13:31.755752    6464 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\config.json ...
	I0229 19:13:31.759238    6464 out.go:177] * Starting worker node multinode-421600-m03 in cluster multinode-421600
	I0229 19:13:31.759951    6464 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 19:13:31.760026    6464 cache.go:56] Caching tarball of preloaded images
	I0229 19:13:31.760447    6464 preload.go:174] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 19:13:31.760607    6464 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0229 19:13:31.760820    6464 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\config.json ...
	I0229 19:13:31.773156    6464 start.go:365] acquiring machines lock for multinode-421600-m03: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 19:13:31.773501    6464 start.go:369] acquired machines lock for "multinode-421600-m03" in 345µs
	I0229 19:13:31.773501    6464 start.go:96] Skipping create...Using existing machine configuration
	I0229 19:13:31.773501    6464 fix.go:54] fixHost starting: m03
	I0229 19:13:31.774152    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m03 ).state
	I0229 19:13:33.755243    6464 main.go:141] libmachine: [stdout =====>] : Off
	
	I0229 19:13:33.755243    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:13:33.755243    6464 fix.go:102] recreateIfNeeded on multinode-421600-m03: state=Stopped err=<nil>
	W0229 19:13:33.755243    6464 fix.go:128] unexpected machine state, will restart: <nil>
	I0229 19:13:33.756003    6464 out.go:177] * Restarting existing hyperv VM for "multinode-421600-m03" ...
	I0229 19:13:33.756953    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-421600-m03
	I0229 19:13:36.499223    6464 main.go:141] libmachine: [stdout =====>] : 
	I0229 19:13:36.499223    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:13:36.499223    6464 main.go:141] libmachine: Waiting for host to start...
	I0229 19:13:36.499223    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m03 ).state
	I0229 19:13:38.575719    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:13:38.576735    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:13:38.576779    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 19:13:40.885706    6464 main.go:141] libmachine: [stdout =====>] : 
	I0229 19:13:40.885706    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:13:41.897053    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m03 ).state
	I0229 19:13:43.910092    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:13:43.910984    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:13:43.911037    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 19:13:46.240311    6464 main.go:141] libmachine: [stdout =====>] : 
	I0229 19:13:46.240344    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:13:47.243110    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m03 ).state
	I0229 19:13:49.282086    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:13:49.282086    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:13:49.282086    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 19:13:51.616682    6464 main.go:141] libmachine: [stdout =====>] : 
	I0229 19:13:51.617702    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:13:52.629143    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m03 ).state
	I0229 19:13:54.676227    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:13:54.676227    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:13:54.676227    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 19:13:57.009864    6464 main.go:141] libmachine: [stdout =====>] : 
	I0229 19:13:57.009864    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:13:58.022928    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m03 ).state
	I0229 19:14:00.078674    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:14:00.078822    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:14:00.078822    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 19:14:02.448605    6464 main.go:141] libmachine: [stdout =====>] : 172.26.59.9
	
	I0229 19:14:02.448605    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:14:02.451930    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m03 ).state
	I0229 19:14:04.436514    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:14:04.436514    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:14:04.436514    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 19:14:06.856538    6464 main.go:141] libmachine: [stdout =====>] : 172.26.59.9
	
	I0229 19:14:06.856538    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:14:06.856966    6464 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600\config.json ...
	I0229 19:14:06.859443    6464 machine.go:88] provisioning docker machine ...
	I0229 19:14:06.859545    6464 buildroot.go:166] provisioning hostname "multinode-421600-m03"
	I0229 19:14:06.859614    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m03 ).state
	I0229 19:14:08.880217    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:14:08.880217    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:14:08.880217    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 19:14:11.268306    6464 main.go:141] libmachine: [stdout =====>] : 172.26.59.9
	
	I0229 19:14:11.269167    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:14:11.272942    6464 main.go:141] libmachine: Using SSH client type: native
	I0229 19:14:11.273412    6464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.59.9 22 <nil> <nil>}
	I0229 19:14:11.273412    6464 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-421600-m03 && echo "multinode-421600-m03" | sudo tee /etc/hostname
	I0229 19:14:11.428726    6464 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-421600-m03
	
	I0229 19:14:11.428866    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m03 ).state
	I0229 19:14:13.409551    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:14:13.409744    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:14:13.409744    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 19:14:15.808256    6464 main.go:141] libmachine: [stdout =====>] : 172.26.59.9
	
	I0229 19:14:15.808512    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:14:15.816679    6464 main.go:141] libmachine: Using SSH client type: native
	I0229 19:14:15.817430    6464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.59.9 22 <nil> <nil>}
	I0229 19:14:15.817430    6464 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-421600-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-421600-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-421600-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 19:14:15.955751    6464 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 19:14:15.955828    6464 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0229 19:14:15.955869    6464 buildroot.go:174] setting up certificates
	I0229 19:14:15.955901    6464 provision.go:83] configureAuth start
	I0229 19:14:15.956020    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m03 ).state
	I0229 19:14:17.918866    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:14:17.918866    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:14:17.918866    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 19:14:20.290045    6464 main.go:141] libmachine: [stdout =====>] : 172.26.59.9
	
	I0229 19:14:20.290359    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:14:20.290624    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m03 ).state
	I0229 19:14:22.305028    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:14:22.305028    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:14:22.305126    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 19:14:24.673178    6464 main.go:141] libmachine: [stdout =====>] : 172.26.59.9
	
	I0229 19:14:24.673178    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:14:24.673178    6464 provision.go:138] copyHostCerts
	I0229 19:14:24.673361    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0229 19:14:24.673538    6464 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0229 19:14:24.673538    6464 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0229 19:14:24.673654    6464 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0229 19:14:24.674782    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0229 19:14:24.674847    6464 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0229 19:14:24.674847    6464 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0229 19:14:24.674847    6464 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0229 19:14:24.675507    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0229 19:14:24.676029    6464 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0229 19:14:24.676029    6464 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0229 19:14:24.676311    6464 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0229 19:14:24.677100    6464 provision.go:112] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-421600-m03 san=[172.26.59.9 172.26.59.9 localhost 127.0.0.1 minikube multinode-421600-m03]
	I0229 19:14:25.257761    6464 provision.go:172] copyRemoteCerts
	I0229 19:14:25.265304    6464 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 19:14:25.265304    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m03 ).state
	I0229 19:14:27.224608    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:14:27.224758    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:14:27.224870    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 19:14:29.643304    6464 main.go:141] libmachine: [stdout =====>] : 172.26.59.9
	
	I0229 19:14:29.643304    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:14:29.643304    6464 sshutil.go:53] new ssh client: &{IP:172.26.59.9 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600-m03\id_rsa Username:docker}
	I0229 19:14:29.741189    6464 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4756357s)
	I0229 19:14:29.741189    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0229 19:14:29.741189    6464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 19:14:29.786057    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0229 19:14:29.786983    6464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I0229 19:14:29.837717    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0229 19:14:29.838219    6464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 19:14:29.890852    6464 provision.go:86] duration metric: configureAuth took 13.9341333s
	I0229 19:14:29.890852    6464 buildroot.go:189] setting minikube options for container-runtime
	I0229 19:14:29.890852    6464 config.go:182] Loaded profile config "multinode-421600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 19:14:29.890852    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m03 ).state
	I0229 19:14:31.869672    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:14:31.869672    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:14:31.869816    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 19:14:34.219586    6464 main.go:141] libmachine: [stdout =====>] : 172.26.59.9
	
	I0229 19:14:34.219586    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:14:34.225617    6464 main.go:141] libmachine: Using SSH client type: native
	I0229 19:14:34.226407    6464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.59.9 22 <nil> <nil>}
	I0229 19:14:34.226407    6464 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 19:14:34.350626    6464 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 19:14:34.350626    6464 buildroot.go:70] root file system type: tmpfs
	I0229 19:14:34.350919    6464 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 19:14:34.351004    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m03 ).state
	I0229 19:14:36.307087    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:14:36.307087    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:14:36.307087    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 19:14:38.691778    6464 main.go:141] libmachine: [stdout =====>] : 172.26.59.9
	
	I0229 19:14:38.691778    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:14:38.696386    6464 main.go:141] libmachine: Using SSH client type: native
	I0229 19:14:38.696794    6464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.59.9 22 <nil> <nil>}
	I0229 19:14:38.696928    6464 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.26.52.109"
	Environment="NO_PROXY=172.26.52.109,172.26.62.204"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 19:14:38.860814    6464 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.26.52.109
	Environment=NO_PROXY=172.26.52.109,172.26.62.204
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 19:14:38.860814    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m03 ).state
	I0229 19:14:40.823740    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:14:40.824625    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:14:40.824859    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 19:14:43.154826    6464 main.go:141] libmachine: [stdout =====>] : 172.26.59.9
	
	I0229 19:14:43.154826    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:14:43.158950    6464 main.go:141] libmachine: Using SSH client type: native
	I0229 19:14:43.159532    6464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.59.9 22 <nil> <nil>}
	I0229 19:14:43.159532    6464 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 19:14:44.365837    6464 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0229 19:14:44.365837    6464 machine.go:91] provisioned docker machine in 37.5042381s
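The unit install above uses a compare-and-swap idiom: write the candidate to `docker.service.new`, then move it into place only when `diff` reports a difference (or, as in this run, the live file does not exist yet). A minimal local sketch of that idiom, using a scratch directory rather than systemd's paths:

```shell
# Sketch of the compare-and-swap unit install seen in the log:
# stage the candidate as docker.service.new, then install it only
# when diff exits non-zero. Scratch paths, not /lib/systemd.
dir=$(mktemp -d)
printf '[Unit]\nDescription=demo unit\n' > "$dir/docker.service.new"

# Live file absent -> diff fails -> staged file is moved into
# place, mirroring the "can't stat ... docker.service" branch.
diff -u "$dir/docker.service" "$dir/docker.service.new" 2>/dev/null \
  || mv "$dir/docker.service.new" "$dir/docker.service"

cat "$dir/docker.service"
```

On the real VM the `||` branch additionally runs `systemctl daemon-reload`, `enable`, and `restart`, which is why the log then shows the `Created symlink` output.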
	I0229 19:14:44.365837    6464 start.go:300] post-start starting for "multinode-421600-m03" (driver="hyperv")
	I0229 19:14:44.366389    6464 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 19:14:44.379746    6464 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 19:14:44.379746    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m03 ).state
	I0229 19:14:46.373845    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:14:46.373845    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:14:46.373942    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 19:14:48.769890    6464 main.go:141] libmachine: [stdout =====>] : 172.26.59.9
	
	I0229 19:14:48.769890    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:14:48.771015    6464 sshutil.go:53] new ssh client: &{IP:172.26.59.9 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600-m03\id_rsa Username:docker}
	I0229 19:14:48.876961    6464 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4968688s)
	I0229 19:14:48.885950    6464 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 19:14:48.893155    6464 command_runner.go:130] > NAME=Buildroot
	I0229 19:14:48.893155    6464 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0229 19:14:48.893203    6464 command_runner.go:130] > ID=buildroot
	I0229 19:14:48.893203    6464 command_runner.go:130] > VERSION_ID=2023.02.9
	I0229 19:14:48.893203    6464 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0229 19:14:48.893203    6464 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 19:14:48.893737    6464 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0229 19:14:48.894093    6464 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0229 19:14:48.894754    6464 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem -> 43562.pem in /etc/ssl/certs
	I0229 19:14:48.894754    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem -> /etc/ssl/certs/43562.pem
	I0229 19:14:48.917125    6464 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 19:14:48.942245    6464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem --> /etc/ssl/certs/43562.pem (1708 bytes)
	I0229 19:14:48.986839    6464 start.go:303] post-start completed in 4.6207455s
	I0229 19:14:48.986839    6464 fix.go:56] fixHost completed within 1m17.2090448s
	I0229 19:14:48.986839    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m03 ).state
	I0229 19:14:50.984989    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:14:50.984989    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:14:50.985790    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 19:14:53.375386    6464 main.go:141] libmachine: [stdout =====>] : 172.26.59.9
	
	I0229 19:14:53.375386    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:14:53.380205    6464 main.go:141] libmachine: Using SSH client type: native
	I0229 19:14:53.380768    6464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.59.9 22 <nil> <nil>}
	I0229 19:14:53.380768    6464 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0229 19:14:53.503636    6464 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709234093.671343878
	
	I0229 19:14:53.503636    6464 fix.go:206] guest clock: 1709234093.671343878
	I0229 19:14:53.503636    6464 fix.go:219] Guest: 2024-02-29 19:14:53.671343878 +0000 UTC Remote: 2024-02-29 19:14:48.9868394 +0000 UTC m=+333.333493201 (delta=4.684504478s)
	I0229 19:14:53.503636    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m03 ).state
	I0229 19:14:55.472836    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:14:55.473815    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:14:55.473815    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 19:14:57.848945    6464 main.go:141] libmachine: [stdout =====>] : 172.26.59.9
	
	I0229 19:14:57.848945    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:14:57.854025    6464 main.go:141] libmachine: Using SSH client type: native
	I0229 19:14:57.854174    6464 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.59.9 22 <nil> <nil>}
	I0229 19:14:57.854174    6464 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709234093
	I0229 19:14:57.990967    6464 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Feb 29 19:14:53 UTC 2024
	
	I0229 19:14:57.990967    6464 fix.go:226] clock set: Thu Feb 29 19:14:53 UTC 2024
	 (err=<nil>)
	I0229 19:14:57.990967    6464 start.go:83] releasing machines lock for "multinode-421600-m03", held for 1m26.2126725s
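The clock fix-up above compares the epoch seconds the guest reports (`date +%s.%N` over SSH) with the host's view, and a non-trivial delta (here ~4.7s) triggers `sudo date -s @<epoch>` on the VM. A local sketch of the delta computation, with the guest value taken from the log and the local clock standing in for the host:

```shell
# Sketch of the guest-clock check: compute host-guest drift in
# whole seconds. guest is the epoch reported in the log above;
# host is just the local clock here, so the delta is illustrative.
guest=1709234093            # epoch seconds reported by the guest
host=$(date +%s)            # local clock, stands in for the host
delta=$((host - guest))
echo "delta=${delta}s"
```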
	I0229 19:14:57.991516    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m03 ).state
	I0229 19:14:59.981495    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:14:59.981495    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:14:59.981569    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 19:15:02.377866    6464 main.go:141] libmachine: [stdout =====>] : 172.26.59.9
	
	I0229 19:15:02.377866    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:15:02.378779    6464 out.go:177] * Found network options:
	I0229 19:15:02.379587    6464 out.go:177]   - NO_PROXY=172.26.52.109,172.26.62.204
	W0229 19:15:02.380208    6464 proxy.go:119] fail to check proxy env: Error ip not in block
	W0229 19:15:02.380276    6464 proxy.go:119] fail to check proxy env: Error ip not in block
	I0229 19:15:02.380777    6464 out.go:177]   - NO_PROXY=172.26.52.109,172.26.62.204
	W0229 19:15:02.381309    6464 proxy.go:119] fail to check proxy env: Error ip not in block
	W0229 19:15:02.381405    6464 proxy.go:119] fail to check proxy env: Error ip not in block
	W0229 19:15:02.382607    6464 proxy.go:119] fail to check proxy env: Error ip not in block
	W0229 19:15:02.382722    6464 proxy.go:119] fail to check proxy env: Error ip not in block
	I0229 19:15:02.385712    6464 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 19:15:02.385872    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m03 ).state
	I0229 19:15:02.392986    6464 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0229 19:15:02.393707    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m03 ).state
	I0229 19:15:04.419908    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:15:04.419908    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:15:04.420021    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 19:15:04.435935    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:15:04.435935    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:15:04.436950    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m03 ).networkadapters[0]).ipaddresses[0]
	I0229 19:15:06.868408    6464 main.go:141] libmachine: [stdout =====>] : 172.26.59.9
	
	I0229 19:15:06.868535    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:15:06.868535    6464 sshutil.go:53] new ssh client: &{IP:172.26.59.9 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600-m03\id_rsa Username:docker}
	I0229 19:15:06.892965    6464 main.go:141] libmachine: [stdout =====>] : 172.26.59.9
	
	I0229 19:15:06.892965    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:15:06.893226    6464 sshutil.go:53] new ssh client: &{IP:172.26.59.9 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600-m03\id_rsa Username:docker}
	I0229 19:15:07.037113    6464 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0229 19:15:07.037298    6464 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6511681s)
	I0229 19:15:07.037386    6464 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0229 19:15:07.037484    6464 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.6441416s)
	W0229 19:15:07.037484    6464 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 19:15:07.048205    6464 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 19:15:07.079404    6464 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0229 19:15:07.080090    6464 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 19:15:07.080153    6464 start.go:475] detecting cgroup driver to use...
	I0229 19:15:07.080450    6464 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 19:15:07.116364    6464 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0229 19:15:07.128705    6464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 19:15:07.160115    6464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 19:15:07.180557    6464 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 19:15:07.192230    6464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 19:15:07.226133    6464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 19:15:07.257572    6464 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 19:15:07.287733    6464 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 19:15:07.316105    6464 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 19:15:07.346183    6464 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 19:15:07.377706    6464 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 19:15:07.395068    6464 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0229 19:15:07.406364    6464 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 19:15:07.438911    6464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 19:15:07.642034    6464 ssh_runner.go:195] Run: sudo systemctl restart containerd
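The containerd configuration in this step is done with in-place `sed` rewrites of `config.toml` (GNU sed, as on the Buildroot guest). The same sandbox-image and cgroup-driver substitutions from the log, applied to a scratch copy with illustrative starting values:

```shell
# Two of the log's sed rewrites of containerd's config.toml,
# run against a scratch file (starting values are invented).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
  SystemdCgroup = true
EOF
# Pin the pause image version used by the CRI plugin.
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
# Select the cgroupfs driver (SystemdCgroup = false).
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
cat "$cfg"
```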
	I0229 19:15:07.674596    6464 start.go:475] detecting cgroup driver to use...
	I0229 19:15:07.684078    6464 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 19:15:07.707883    6464 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0229 19:15:07.707927    6464 command_runner.go:130] > [Unit]
	I0229 19:15:07.707961    6464 command_runner.go:130] > Description=Docker Application Container Engine
	I0229 19:15:07.707961    6464 command_runner.go:130] > Documentation=https://docs.docker.com
	I0229 19:15:07.707961    6464 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0229 19:15:07.708002    6464 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0229 19:15:07.708002    6464 command_runner.go:130] > StartLimitBurst=3
	I0229 19:15:07.708002    6464 command_runner.go:130] > StartLimitIntervalSec=60
	I0229 19:15:07.708002    6464 command_runner.go:130] > [Service]
	I0229 19:15:07.708037    6464 command_runner.go:130] > Type=notify
	I0229 19:15:07.708037    6464 command_runner.go:130] > Restart=on-failure
	I0229 19:15:07.708037    6464 command_runner.go:130] > Environment=NO_PROXY=172.26.52.109
	I0229 19:15:07.708060    6464 command_runner.go:130] > Environment=NO_PROXY=172.26.52.109,172.26.62.204
	I0229 19:15:07.708060    6464 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0229 19:15:07.708060    6464 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0229 19:15:07.708060    6464 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0229 19:15:07.708060    6464 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0229 19:15:07.708060    6464 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0229 19:15:07.708060    6464 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0229 19:15:07.708060    6464 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0229 19:15:07.708060    6464 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0229 19:15:07.708060    6464 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0229 19:15:07.708060    6464 command_runner.go:130] > ExecStart=
	I0229 19:15:07.708060    6464 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0229 19:15:07.708060    6464 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0229 19:15:07.708060    6464 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0229 19:15:07.708060    6464 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0229 19:15:07.708060    6464 command_runner.go:130] > LimitNOFILE=infinity
	I0229 19:15:07.708060    6464 command_runner.go:130] > LimitNPROC=infinity
	I0229 19:15:07.708060    6464 command_runner.go:130] > LimitCORE=infinity
	I0229 19:15:07.708060    6464 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0229 19:15:07.708060    6464 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0229 19:15:07.708060    6464 command_runner.go:130] > TasksMax=infinity
	I0229 19:15:07.708060    6464 command_runner.go:130] > TimeoutStartSec=0
	I0229 19:15:07.708060    6464 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0229 19:15:07.708060    6464 command_runner.go:130] > Delegate=yes
	I0229 19:15:07.708060    6464 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0229 19:15:07.708060    6464 command_runner.go:130] > KillMode=process
	I0229 19:15:07.708060    6464 command_runner.go:130] > [Install]
	I0229 19:15:07.708060    6464 command_runner.go:130] > WantedBy=multi-user.target
	I0229 19:15:07.717800    6464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 19:15:07.750028    6464 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 19:15:07.786604    6464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 19:15:07.819693    6464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 19:15:07.853602    6464 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 19:15:07.910455    6464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 19:15:07.935957    6464 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 19:15:07.973120    6464 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0229 19:15:07.982186    6464 ssh_runner.go:195] Run: which cri-dockerd
	I0229 19:15:07.987753    6464 command_runner.go:130] > /usr/bin/cri-dockerd
	I0229 19:15:07.996925    6464 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 19:15:08.014529    6464 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 19:15:08.053159    6464 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 19:15:08.248169    6464 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 19:15:08.424001    6464 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 19:15:08.424001    6464 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 19:15:08.469013    6464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 19:15:08.665031    6464 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 19:15:10.232022    6464 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5669037s)
	I0229 19:15:10.241357    6464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0229 19:15:10.278221    6464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 19:15:10.315558    6464 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0229 19:15:10.517284    6464 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0229 19:15:10.725354    6464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 19:15:10.923357    6464 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0229 19:15:10.967079    6464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 19:15:11.001658    6464 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 19:15:11.198999    6464 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0229 19:15:11.302500    6464 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0229 19:15:11.313696    6464 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0229 19:15:11.322082    6464 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0229 19:15:11.322082    6464 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0229 19:15:11.322082    6464 command_runner.go:130] > Device: 0,22	Inode: 848         Links: 1
	I0229 19:15:11.322082    6464 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0229 19:15:11.322082    6464 command_runner.go:130] > Access: 2024-02-29 19:15:11.393966511 +0000
	I0229 19:15:11.322082    6464 command_runner.go:130] > Modify: 2024-02-29 19:15:11.393966511 +0000
	I0229 19:15:11.322082    6464 command_runner.go:130] > Change: 2024-02-29 19:15:11.397966681 +0000
	I0229 19:15:11.322082    6464 command_runner.go:130] >  Birth: -
	I0229 19:15:11.322082    6464 start.go:543] Will wait 60s for crictl version
	I0229 19:15:11.331188    6464 ssh_runner.go:195] Run: which crictl
	I0229 19:15:11.337394    6464 command_runner.go:130] > /usr/bin/crictl
	I0229 19:15:11.347846    6464 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 19:15:11.423747    6464 command_runner.go:130] > Version:  0.1.0
	I0229 19:15:11.423808    6464 command_runner.go:130] > RuntimeName:  docker
	I0229 19:15:11.423808    6464 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0229 19:15:11.423861    6464 command_runner.go:130] > RuntimeApiVersion:  v1
	I0229 19:15:11.423903    6464 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0229 19:15:11.432388    6464 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 19:15:11.465056    6464 command_runner.go:130] > 24.0.7
	I0229 19:15:11.475748    6464 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 19:15:11.509724    6464 command_runner.go:130] > 24.0.7
	I0229 19:15:11.511814    6464 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0229 19:15:11.512434    6464 out.go:177]   - env NO_PROXY=172.26.52.109
	I0229 19:15:11.512682    6464 out.go:177]   - env NO_PROXY=172.26.52.109,172.26.62.204
	I0229 19:15:11.513510    6464 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0229 19:15:11.517980    6464 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0229 19:15:11.517980    6464 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0229 19:15:11.517980    6464 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0229 19:15:11.518059    6464 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:76:3f:19 Flags:up|broadcast|multicast|running}
	I0229 19:15:11.521102    6464 ip.go:210] interface addr: fe80::841a:4367:8c9:abc/64
	I0229 19:15:11.521102    6464 ip.go:210] interface addr: 172.26.48.1/20
	I0229 19:15:11.530922    6464 ssh_runner.go:195] Run: grep 172.26.48.1	host.minikube.internal$ /etc/hosts
	I0229 19:15:11.537654    6464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.26.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
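The `host.minikube.internal` entry above is refreshed with a filter-and-append rewrite: strip any existing line for the name, append the new mapping, and copy the result over `/etc/hosts`. Sketched here on a scratch file standing in for `/etc/hosts` (the stale `10.0.0.1` entry is invented to show the replacement):

```shell
# Filter-and-append /etc/hosts rewrite, on a scratch copy.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.1\thost.minikube.internal\n' > "$hosts"
# Drop any old mapping for the name, then append the new one.
{ grep -v "host.minikube.internal$" "$hosts"; \
  printf '172.26.48.1\thost.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"
cat "$hosts"
```

Writing to a temp file and copying back (rather than editing in place) keeps the live file intact if any step fails.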
	I0229 19:15:11.559141    6464 certs.go:56] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-421600 for IP: 172.26.59.9
	I0229 19:15:11.559193    6464 certs.go:190] acquiring lock for shared ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:15:11.559285    6464 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0229 19:15:11.560204    6464 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0229 19:15:11.560475    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0229 19:15:11.560708    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0229 19:15:11.560867    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0229 19:15:11.560943    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0229 19:15:11.561378    6464 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4356.pem (1338 bytes)
	W0229 19:15:11.561531    6464 certs.go:433] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4356_empty.pem, impossibly tiny 0 bytes
	I0229 19:15:11.561692    6464 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0229 19:15:11.561825    6464 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0229 19:15:11.562077    6464 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0229 19:15:11.562235    6464 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0229 19:15:11.562608    6464 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem (1708 bytes)
	I0229 19:15:11.562710    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0229 19:15:11.562884    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4356.pem -> /usr/share/ca-certificates/4356.pem
	I0229 19:15:11.562981    6464 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem -> /usr/share/ca-certificates/43562.pem
	I0229 19:15:11.563764    6464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 19:15:11.612071    6464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 19:15:11.659164    6464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 19:15:11.709824    6464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 19:15:11.756368    6464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 19:15:11.808441    6464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4356.pem --> /usr/share/ca-certificates/4356.pem (1338 bytes)
	I0229 19:15:11.858007    6464 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem --> /usr/share/ca-certificates/43562.pem (1708 bytes)
	I0229 19:15:11.920890    6464 ssh_runner.go:195] Run: openssl version
	I0229 19:15:11.930436    6464 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0229 19:15:11.939612    6464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43562.pem && ln -fs /usr/share/ca-certificates/43562.pem /etc/ssl/certs/43562.pem"
	I0229 19:15:11.971278    6464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43562.pem
	I0229 19:15:11.982244    6464 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 29 17:55 /usr/share/ca-certificates/43562.pem
	I0229 19:15:11.982570    6464 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:55 /usr/share/ca-certificates/43562.pem
	I0229 19:15:11.992023    6464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43562.pem
	I0229 19:15:12.001365    6464 command_runner.go:130] > 3ec20f2e
	I0229 19:15:12.010560    6464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43562.pem /etc/ssl/certs/3ec20f2e.0"
	I0229 19:15:12.043241    6464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 19:15:12.071406    6464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 19:15:12.079193    6464 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 29 17:41 /usr/share/ca-certificates/minikubeCA.pem
	I0229 19:15:12.079349    6464 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:41 /usr/share/ca-certificates/minikubeCA.pem
	I0229 19:15:12.087315    6464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 19:15:12.097087    6464 command_runner.go:130] > b5213941
	I0229 19:15:12.106116    6464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 19:15:12.136122    6464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4356.pem && ln -fs /usr/share/ca-certificates/4356.pem /etc/ssl/certs/4356.pem"
	I0229 19:15:12.166606    6464 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4356.pem
	I0229 19:15:12.173613    6464 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 29 17:55 /usr/share/ca-certificates/4356.pem
	I0229 19:15:12.174012    6464 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:55 /usr/share/ca-certificates/4356.pem
	I0229 19:15:12.181656    6464 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4356.pem
	I0229 19:15:12.190266    6464 command_runner.go:130] > 51391683
	I0229 19:15:12.199306    6464 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4356.pem /etc/ssl/certs/51391683.0"
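	Each certificate above goes through the same three-step dance: hash it (`openssl x509 -hash -noout -in <pem>`), then create `/etc/ssl/certs/<hash>.0` as a symlink only if one is not already there (`test -L ... || ln -fs ...`), which is what keeps repeated provisioning runs safe. A sketch of the symlink half with scratch paths (the hash value `51391683` is copied from the log output rather than recomputed):

```shell
#!/bin/bash
# Idempotent <hash>.0 symlink creation, as in the log's `test -L || ln -fs`.
# A scratch directory stands in for /etc/ssl/certs; the hash is copied from
# the log output above, not recomputed with openssl.
certs=$(mktemp -d)
: > "$certs/4356.pem"          # stand-in for the real certificate
hash=51391683

link() { test -L "$certs/$hash.0" || ln -fs "$certs/4356.pem" "$certs/$hash.0"; }
link   # first run creates the symlink
link   # second run is a no-op: the link already exists

readlink "$certs/$hash.0"      # points at the certificate
```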
	I0229 19:15:12.229646    6464 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 19:15:12.235864    6464 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 19:15:12.236227    6464 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
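	The `Process exited with status 2` above is the point of the probe: minikube lists the etcd certs directory with a plain `ls` and reads the exit status, not the output, to conclude this is likely a first start. A sketch of that probe pattern against a scratch path (the path is a stand-in, not the real `/var/lib/minikube/certs/etcd`):

```shell
#!/bin/bash
# Exit-status probe: a missing directory makes `ls` fail, which the caller
# treats as "first start". A scratch path stands in for the real certs dir.
probe="$(mktemp -d)/certs/etcd"   # intentionally never created

if ls "$probe" >/dev/null 2>&1; then
  echo "certs directory exists, reusing"
else
  echo "certs directory missing, likely first start"
fi
```

Here the branch prints "certs directory missing, likely first start", matching the log's `certs directory doesn't exist, likely first start` conclusion.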
	I0229 19:15:12.243203    6464 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 19:15:12.278720    6464 command_runner.go:130] > cgroupfs
	I0229 19:15:12.278976    6464 cni.go:84] Creating CNI manager for ""
	I0229 19:15:12.279033    6464 cni.go:136] 3 nodes found, recommending kindnet
	I0229 19:15:12.279033    6464 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 19:15:12.279158    6464 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.26.59.9 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-421600 NodeName:multinode-421600-m03 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.26.52.109"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.26.59.9 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0229 19:15:12.279477    6464 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.26.59.9
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-421600-m03"
	  kubeletExtraArgs:
	    node-ip: 172.26.59.9
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.26.52.109"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 19:15:12.279579    6464 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-421600-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.26.59.9
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-421600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 19:15:12.293350    6464 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0229 19:15:12.313969    6464 command_runner.go:130] > kubeadm
	I0229 19:15:12.313969    6464 command_runner.go:130] > kubectl
	I0229 19:15:12.313969    6464 command_runner.go:130] > kubelet
	I0229 19:15:12.313969    6464 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 19:15:12.324529    6464 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0229 19:15:12.343554    6464 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0229 19:15:12.374624    6464 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 19:15:12.417465    6464 ssh_runner.go:195] Run: grep 172.26.52.109	control-plane.minikube.internal$ /etc/hosts
	I0229 19:15:12.423497    6464 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.26.52.109	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 19:15:12.445709    6464 host.go:66] Checking if "multinode-421600" exists ...
	I0229 19:15:12.446323    6464 config.go:182] Loaded profile config "multinode-421600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 19:15:12.446323    6464 start.go:304] JoinCluster: &{Name:multinode-421600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-421600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.26.52.109 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.26.62.204 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.26.59.9 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 19:15:12.446517    6464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0229 19:15:12.446580    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 19:15:14.420843    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:15:14.421316    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:15:14.421388    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 19:15:16.795928    6464 main.go:141] libmachine: [stdout =====>] : 172.26.52.109
	
	I0229 19:15:16.796019    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:15:16.796139    6464 sshutil.go:53] new ssh client: &{IP:172.26.52.109 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600\id_rsa Username:docker}
	I0229 19:15:16.995357    6464 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token lonnlh.n40evrz3232q76sy --discovery-token-ca-cert-hash sha256:cee10ebbc824bfc36c0d81f93293570211b0e6bda8098cea612d080b286ee20e 
	I0229 19:15:16.995452    6464 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.5486822s)
	I0229 19:15:16.995452    6464 start.go:317] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:172.26.59.9 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0229 19:15:16.995452    6464 host.go:66] Checking if "multinode-421600" exists ...
	I0229 19:15:17.005812    6464 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-421600-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0229 19:15:17.005812    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 19:15:18.990419    6464 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:15:18.990547    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:15:18.990634    6464 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 19:15:21.364006    6464 main.go:141] libmachine: [stdout =====>] : 172.26.52.109
	
	I0229 19:15:21.364006    6464 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:15:21.364006    6464 sshutil.go:53] new ssh client: &{IP:172.26.52.109 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600\id_rsa Username:docker}
	I0229 19:15:21.578551    6464 command_runner.go:130] > node/multinode-421600-m03 cordoned
	I0229 19:15:21.597627    6464 command_runner.go:130] > node/multinode-421600-m03 drained
	I0229 19:15:21.600415    6464 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0229 19:15:21.600522    6464 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-7nzdd, kube-system/kube-proxy-rhg8l
	I0229 19:15:21.600522    6464 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-421600-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (4.5944555s)
	I0229 19:15:21.600522    6464 node.go:108] successfully drained node "m03"
	I0229 19:15:21.601594    6464 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 19:15:21.602298    6464 kapi.go:59] client config for multinode-421600: &rest.Config{Host:"https://172.26.52.109:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-421600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-421600\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ff0600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 19:15:21.603031    6464 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0229 19:15:21.603145    6464 round_trippers.go:463] DELETE https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:21.603145    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:21.603145    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:21.603145    6464 round_trippers.go:473]     Content-Type: application/json
	I0229 19:15:21.603145    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:21.621448    6464 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0229 19:15:21.621448    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:21.621448    6464 round_trippers.go:580]     Audit-Id: fd94e213-9c63-47e4-b697-36d4ece5b710
	I0229 19:15:21.621448    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:21.621448    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:21.621448    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:21.621448    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:21.621448    6464 round_trippers.go:580]     Content-Length: 171
	I0229 19:15:21.621448    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:21 GMT
	I0229 19:15:21.622014    6464 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-421600-m03","kind":"nodes","uid":"096122b8-0719-4361-9b63-57130df92d29"}}
	I0229 19:15:21.622014    6464 node.go:124] successfully deleted node "m03"
	I0229 19:15:21.622014    6464 start.go:321] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:172.26.59.9 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0229 19:15:21.622014    6464 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:172.26.59.9 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0229 19:15:21.622159    6464 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lonnlh.n40evrz3232q76sy --discovery-token-ca-cert-hash sha256:cee10ebbc824bfc36c0d81f93293570211b0e6bda8098cea612d080b286ee20e --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-421600-m03"
	I0229 19:15:21.947555    6464 command_runner.go:130] ! W0229 19:15:22.117787    1317 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0229 19:15:22.587967    6464 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 19:15:24.432528    6464 command_runner.go:130] > [preflight] Running pre-flight checks
	I0229 19:15:24.432528    6464 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0229 19:15:24.432528    6464 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0229 19:15:24.432528    6464 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 19:15:24.432528    6464 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 19:15:24.432528    6464 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0229 19:15:24.432528    6464 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0229 19:15:24.432528    6464 command_runner.go:130] > This node has joined the cluster:
	I0229 19:15:24.433559    6464 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0229 19:15:24.433559    6464 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0229 19:15:24.433559    6464 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0229 19:15:24.433620    6464 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lonnlh.n40evrz3232q76sy --discovery-token-ca-cert-hash sha256:cee10ebbc824bfc36c0d81f93293570211b0e6bda8098cea612d080b286ee20e --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-421600-m03": (2.8113047s)
	I0229 19:15:24.433665    6464 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0229 19:15:24.660734    6464 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0229 19:15:24.891303    6464 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19 minikube.k8s.io/name=multinode-421600 minikube.k8s.io/updated_at=2024_02_29T19_15_24_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0229 19:15:25.041656    6464 command_runner.go:130] > node/multinode-421600-m02 labeled
	I0229 19:15:25.041656    6464 command_runner.go:130] > node/multinode-421600-m03 labeled
	I0229 19:15:25.041781    6464 start.go:306] JoinCluster complete in 12.5947575s
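	The 12.6s JoinCluster span above is a four-step rejoin of the stale worker: mint a fresh join token on the control plane, drain the old node, delete its Node object over the API, then `kubeadm join` under the same `--node-name`. A dry-run sketch of that orchestration, printing each command instead of executing it (node and endpoint names are taken from the log; `<token>` and `<ca-hash>` are deliberate placeholders for the minted credentials):

```shell
#!/bin/bash
# Dry-run of the rejoin sequence from the log: commands are printed, not run,
# since they need a live control plane. Node/endpoint names come from the
# log; <token> and <ca-hash> stand in for the freshly minted credentials.
node=multinode-421600-m03
endpoint=control-plane.minikube.internal:8443

rejoin_plan() {
  echo "kubeadm token create --print-join-command --ttl=0"
  echo "kubectl drain $node --force --ignore-daemonsets --delete-emptydir-data"
  echo "kubectl delete node $node"
  echo "kubeadm join $endpoint --token <token> --discovery-token-ca-cert-hash <ca-hash> --node-name=$node"
}
rejoin_plan
```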
	I0229 19:15:25.041781    6464 cni.go:84] Creating CNI manager for ""
	I0229 19:15:25.041893    6464 cni.go:136] 3 nodes found, recommending kindnet
	I0229 19:15:25.051059    6464 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0229 19:15:25.059659    6464 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0229 19:15:25.059909    6464 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0229 19:15:25.059909    6464 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0229 19:15:25.059909    6464 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0229 19:15:25.059969    6464 command_runner.go:130] > Access: 2024-02-29 19:09:50.700291300 +0000
	I0229 19:15:25.059969    6464 command_runner.go:130] > Modify: 2024-02-23 03:39:37.000000000 +0000
	I0229 19:15:25.059969    6464 command_runner.go:130] > Change: 2024-02-29 19:09:39.251000000 +0000
	I0229 19:15:25.059969    6464 command_runner.go:130] >  Birth: -
	I0229 19:15:25.060099    6464 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0229 19:15:25.060142    6464 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0229 19:15:25.109894    6464 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0229 19:15:25.514824    6464 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0229 19:15:25.514893    6464 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0229 19:15:25.514893    6464 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0229 19:15:25.514893    6464 command_runner.go:130] > daemonset.apps/kindnet configured
	I0229 19:15:25.516048    6464 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 19:15:25.517628    6464 kapi.go:59] client config for multinode-421600: &rest.Config{Host:"https://172.26.52.109:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-421600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-421600\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ff0600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 19:15:25.518192    6464 round_trippers.go:463] GET https://172.26.52.109:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0229 19:15:25.518192    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:25.518192    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:25.518192    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:25.524065    6464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 19:15:25.524132    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:25.524132    6464 round_trippers.go:580]     Audit-Id: 7826b6ba-42b9-42c9-877d-713e20ff4ae1
	I0229 19:15:25.524132    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:25.524132    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:25.524132    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:25.524132    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:25.524132    6464 round_trippers.go:580]     Content-Length: 292
	I0229 19:15:25.524132    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:25 GMT
	I0229 19:15:25.524132    6464 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b9192a23-827d-4114-8861-df907bfdc0ef","resourceVersion":"1689","creationTimestamp":"2024-02-29T18:50:38Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0229 19:15:25.524132    6464 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-421600" context rescaled to 1 replicas
	I0229 19:15:25.524132    6464 start.go:223] Will wait 6m0s for node &{Name:m03 IP:172.26.59.9 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0229 19:15:25.525011    6464 out.go:177] * Verifying Kubernetes components...
	I0229 19:15:25.534336    6464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:15:25.561361    6464 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 19:15:25.562732    6464 kapi.go:59] client config for multinode-421600: &rest.Config{Host:"https://172.26.52.109:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-421600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-421600\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ff0600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 19:15:25.563055    6464 node_ready.go:35] waiting up to 6m0s for node "multinode-421600-m03" to be "Ready" ...
	I0229 19:15:25.563595    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:25.563723    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:25.563723    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:25.563723    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:25.569587    6464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 19:15:25.569587    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:25.569587    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:25 GMT
	I0229 19:15:25.570131    6464 round_trippers.go:580]     Audit-Id: f5bd8ff8-0d63-48fc-a073-53dc5af11606
	I0229 19:15:25.570131    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:25.570131    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:25.570131    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:25.570131    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:25.570377    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2006","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3496 chars]
	I0229 19:15:26.077559    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:26.077559    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:26.077559    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:26.077559    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:26.081972    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:15:26.081972    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:26.081972    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:26.081972    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:26 GMT
	I0229 19:15:26.081972    6464 round_trippers.go:580]     Audit-Id: cd4e391e-24b7-427f-b845-a5312da9c4ec
	I0229 19:15:26.081972    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:26.081972    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:26.081972    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:26.082339    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2006","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3496 chars]
	I0229 19:15:26.578245    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:26.578614    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:26.578614    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:26.578614    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:26.582787    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:15:26.582832    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:26.582832    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:26.582832    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:26 GMT
	I0229 19:15:26.582832    6464 round_trippers.go:580]     Audit-Id: 58b54239-0912-4c08-842b-5a586008fc08
	I0229 19:15:26.582832    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:26.582832    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:26.582832    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:26.583014    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2006","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3496 chars]
	I0229 19:15:27.079962    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:27.079962    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:27.079962    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:27.079962    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:27.083984    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:15:27.083984    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:27.083984    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:27.083984    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:27 GMT
	I0229 19:15:27.083984    6464 round_trippers.go:580]     Audit-Id: 745ee3dc-ae00-4e09-9b9a-398865787354
	I0229 19:15:27.083984    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:27.083984    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:27.084232    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:27.084371    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2006","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3496 chars]
	I0229 19:15:27.567205    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:27.567458    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:27.567458    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:27.567458    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:27.571796    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:15:27.571796    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:27.571796    6464 round_trippers.go:580]     Audit-Id: 0bbee1d7-db76-4be5-adf9-52e4890b4596
	I0229 19:15:27.571796    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:27.571796    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:27.571796    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:27.571796    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:27.571796    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:27 GMT
	I0229 19:15:27.571796    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2006","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3496 chars]
	I0229 19:15:27.571796    6464 node_ready.go:58] node "multinode-421600-m03" has status "Ready":"False"
	I0229 19:15:28.071325    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:28.071519    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:28.071519    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:28.071519    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:28.074921    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:15:28.075791    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:28.075791    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:28 GMT
	I0229 19:15:28.075791    6464 round_trippers.go:580]     Audit-Id: 5d1928c3-6218-419d-9bb9-e24d06d2dd9d
	I0229 19:15:28.075871    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:28.075871    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:28.075871    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:28.075871    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:28.076259    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2006","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3496 chars]
	I0229 19:15:28.574665    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:28.574755    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:28.574847    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:28.574847    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:28.579114    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:15:28.579114    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:28.579114    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:28.579114    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:28.579114    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:28.579114    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:28.579114    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:28 GMT
	I0229 19:15:28.579114    6464 round_trippers.go:580]     Audit-Id: 78c06769-8af7-41ff-98d4-078657980790
	I0229 19:15:28.579114    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2006","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3496 chars]
	I0229 19:15:29.077188    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:29.077188    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:29.077188    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:29.077188    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:29.082081    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:15:29.082081    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:29.082081    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:29.082674    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:29.082674    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:29 GMT
	I0229 19:15:29.082674    6464 round_trippers.go:580]     Audit-Id: 87a3efa4-72f7-4b7e-b61e-d6bda059ae31
	I0229 19:15:29.082674    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:29.082674    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:29.082767    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2006","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3496 chars]
	I0229 19:15:29.579396    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:29.579396    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:29.579396    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:29.579396    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:29.583919    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:15:29.583919    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:29.584064    6464 round_trippers.go:580]     Audit-Id: ef60fecd-603a-4834-b68f-5ebad1a51e0d
	I0229 19:15:29.584064    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:29.584064    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:29.584064    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:29.584064    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:29.584064    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:29 GMT
	I0229 19:15:29.584256    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2006","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3496 chars]
	I0229 19:15:29.585246    6464 node_ready.go:58] node "multinode-421600-m03" has status "Ready":"False"
	I0229 19:15:30.068931    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:30.068980    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:30.068980    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:30.068980    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:30.073120    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:15:30.073120    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:30.073120    6464 round_trippers.go:580]     Audit-Id: 94a08ff6-1496-4d23-829b-ece756df0a37
	I0229 19:15:30.073120    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:30.073120    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:30.073120    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:30.073244    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:30.073244    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:30 GMT
	I0229 19:15:30.073636    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2006","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3496 chars]
	I0229 19:15:30.570909    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:30.571253    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:30.571253    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:30.571253    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:30.575780    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:15:30.575853    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:30.575853    6464 round_trippers.go:580]     Audit-Id: 300b293f-707c-407d-97a0-56e65d6cc184
	I0229 19:15:30.575853    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:30.575853    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:30.575853    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:30.575853    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:30.575853    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:30 GMT
	I0229 19:15:30.575853    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2006","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3496 chars]
	I0229 19:15:31.073527    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:31.073639    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:31.073639    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:31.073639    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:31.078691    6464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 19:15:31.078691    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:31.078691    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:31.078691    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:31.078691    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:31 GMT
	I0229 19:15:31.078691    6464 round_trippers.go:580]     Audit-Id: 95f24da7-341f-4957-a1f5-8fed848cb484
	I0229 19:15:31.078691    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:31.078691    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:31.079059    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2006","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3496 chars]
	I0229 19:15:31.576327    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:31.576424    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:31.576493    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:31.576493    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:31.581171    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:15:31.581171    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:31.581171    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:31.581171    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:31.581171    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:31 GMT
	I0229 19:15:31.581171    6464 round_trippers.go:580]     Audit-Id: ac0af225-2760-4b35-b44c-2b655bb30c51
	I0229 19:15:31.581171    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:31.581171    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:31.581671    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2006","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3496 chars]
	I0229 19:15:32.077561    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:32.077561    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:32.077561    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:32.077670    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:32.083990    6464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 19:15:32.083990    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:32.084531    6464 round_trippers.go:580]     Audit-Id: b4508365-6495-4f8d-8c85-2824c976a5d4
	I0229 19:15:32.084531    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:32.084531    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:32.084531    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:32.084531    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:32.084655    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:32 GMT
	I0229 19:15:32.084712    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2006","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3496 chars]
	I0229 19:15:32.085417    6464 node_ready.go:58] node "multinode-421600-m03" has status "Ready":"False"
	I0229 19:15:32.577963    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:32.577963    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:32.578087    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:32.578087    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:32.581402    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:15:32.581402    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:32.581402    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:32.581402    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:32 GMT
	I0229 19:15:32.582391    6464 round_trippers.go:580]     Audit-Id: 95683898-2c53-4b19-90bc-b0fa21833377
	I0229 19:15:32.582391    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:32.582391    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:32.582391    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:32.582563    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2006","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3496 chars]
	I0229 19:15:33.078778    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:33.078876    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:33.078876    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:33.078876    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:33.082595    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:15:33.082595    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:33.082595    6464 round_trippers.go:580]     Audit-Id: 1481228c-b6e2-4083-8fd4-6cd5d31012d1
	I0229 19:15:33.082595    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:33.082595    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:33.082595    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:33.082595    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:33.083121    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:33 GMT
	I0229 19:15:33.083294    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2006","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3496 chars]
	I0229 19:15:33.566779    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:33.566886    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:33.566886    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:33.566941    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:33.571019    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:15:33.571091    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:33.571091    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:33.571091    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:33.571091    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:33 GMT
	I0229 19:15:33.571091    6464 round_trippers.go:580]     Audit-Id: 4424ad4b-6483-4aae-93df-69ab2bb7ca15
	I0229 19:15:33.571091    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:33.571091    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:33.571376    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2006","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3496 chars]
	I0229 19:15:34.069528    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:34.069592    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:34.069669    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:34.069669    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:34.073236    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:15:34.073236    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:34.073236    6464 round_trippers.go:580]     Audit-Id: 697a1e7a-c9d7-4c80-be1b-e615f0366639
	I0229 19:15:34.073236    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:34.073236    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:34.073236    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:34.073236    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:34.073236    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:34 GMT
	I0229 19:15:34.073823    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:34.570596    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:34.570596    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:34.570596    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:34.570596    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:34.574583    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:15:34.575358    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:34.575358    6464 round_trippers.go:580]     Audit-Id: 53eef598-0af0-4b39-b0c5-1a34ecd6603f
	I0229 19:15:34.575358    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:34.575358    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:34.575358    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:34.575358    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:34.575460    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:34 GMT
	I0229 19:15:34.575721    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:34.576098    6464 node_ready.go:58] node "multinode-421600-m03" has status "Ready":"False"
	I0229 19:15:35.071344    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:35.071344    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:35.071344    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:35.071344    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:35.075719    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:15:35.075719    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:35.075719    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:35.075719    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:35.075719    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:35.075719    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:35 GMT
	I0229 19:15:35.075719    6464 round_trippers.go:580]     Audit-Id: c81c14fe-73cd-4789-84c0-b5284447755a
	I0229 19:15:35.075719    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:35.076565    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:35.574938    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:35.574938    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:35.574938    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:35.574938    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:35.578486    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:15:35.579378    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:35.579378    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:35.579378    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:35.579378    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:35.579441    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:35 GMT
	I0229 19:15:35.579441    6464 round_trippers.go:580]     Audit-Id: 1f856288-cc92-40ad-b971-a57895c0fa0a
	I0229 19:15:35.579441    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:35.579441    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:36.077669    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:36.077726    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:36.077726    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:36.077782    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:36.082358    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:15:36.082421    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:36.082421    6464 round_trippers.go:580]     Audit-Id: a499219d-058a-490e-9aae-eebd8a8cb87b
	I0229 19:15:36.082421    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:36.082421    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:36.082421    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:36.082421    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:36.082421    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:36 GMT
	I0229 19:15:36.082505    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:36.576248    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:36.576248    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:36.576248    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:36.576248    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:36.579805    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:15:36.580540    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:36.580540    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:36 GMT
	I0229 19:15:36.580540    6464 round_trippers.go:580]     Audit-Id: 969e84c2-0417-44fb-9cfd-4430ccbaacc1
	I0229 19:15:36.580540    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:36.580540    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:36.580540    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:36.580540    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:36.580699    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:36.581139    6464 node_ready.go:58] node "multinode-421600-m03" has status "Ready":"False"
	I0229 19:15:37.074754    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:37.074849    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:37.074849    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:37.074849    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:37.079466    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:15:37.079466    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:37.079913    6464 round_trippers.go:580]     Audit-Id: be439cb7-10b3-45ef-b3ea-06fc40299415
	I0229 19:15:37.079913    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:37.079913    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:37.079913    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:37.079913    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:37.079913    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:37 GMT
	I0229 19:15:37.080147    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:37.578142    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:37.578249    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:37.578249    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:37.578249    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:37.582641    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:15:37.582641    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:37.582641    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:37.582641    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:37.582641    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:37.582641    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:37 GMT
	I0229 19:15:37.582641    6464 round_trippers.go:580]     Audit-Id: da659452-3bd2-4e92-b1cd-36e745077f74
	I0229 19:15:37.582641    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:37.583187    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:38.077099    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:38.077099    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:38.077099    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:38.077099    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:38.085225    6464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 19:15:38.085270    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:38.085270    6464 round_trippers.go:580]     Audit-Id: e6fb3f6d-80d2-40af-b655-9781aa2dd01b
	I0229 19:15:38.085325    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:38.085325    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:38.085325    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:38.085325    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:38.085325    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:38 GMT
	I0229 19:15:38.085468    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:38.569798    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:38.569798    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:38.569798    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:38.569798    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:38.573199    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:15:38.573199    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:38.574177    6464 round_trippers.go:580]     Audit-Id: 8b89829b-a4e2-4d41-8356-e832db420496
	I0229 19:15:38.574177    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:38.574177    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:38.574177    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:38.574177    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:38.574177    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:38 GMT
	I0229 19:15:38.574177    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:39.075867    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:39.075867    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:39.075867    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:39.075867    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:39.079081    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:15:39.079431    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:39.079431    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:39.079431    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:39.079431    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:39.079431    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:39 GMT
	I0229 19:15:39.079431    6464 round_trippers.go:580]     Audit-Id: 52393a6e-3ea0-4ad7-8a7e-d63e4a82ac4c
	I0229 19:15:39.079431    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:39.079431    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:39.079966    6464 node_ready.go:58] node "multinode-421600-m03" has status "Ready":"False"
	I0229 19:15:39.566553    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:39.566696    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:39.566696    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:39.566696    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:39.572564    6464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 19:15:39.572564    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:39.572564    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:39.572797    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:39.572797    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:39.572797    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:39 GMT
	I0229 19:15:39.572797    6464 round_trippers.go:580]     Audit-Id: e258fbdb-975e-403e-a936-fda2e97c4052
	I0229 19:15:39.572797    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:39.573430    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:40.073085    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:40.073085    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:40.073184    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:40.073184    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:40.077239    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:15:40.077330    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:40.077330    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:40.077330    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:40.077330    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:40.077330    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:40.077330    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:40 GMT
	I0229 19:15:40.077330    6464 round_trippers.go:580]     Audit-Id: 81efd15b-dd0b-4b6f-860d-e45cc8743bbd
	I0229 19:15:40.077579    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:40.574120    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:40.574204    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:40.574204    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:40.574204    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:40.577965    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:15:40.577965    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:40.577965    6464 round_trippers.go:580]     Audit-Id: 65092f5d-03be-4043-8d83-cb410762a9a6
	I0229 19:15:40.577965    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:40.577965    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:40.577965    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:40.577965    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:40.578180    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:40 GMT
	I0229 19:15:40.578797    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:41.073636    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:41.073636    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:41.073636    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:41.073636    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:41.078368    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:15:41.078368    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:41.078368    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:41.078368    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:41.078368    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:41.078368    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:41 GMT
	I0229 19:15:41.078368    6464 round_trippers.go:580]     Audit-Id: c55d2ea8-4664-4ef0-a4ee-fa6199e0eff7
	I0229 19:15:41.078368    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:41.079388    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:41.570616    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:41.570800    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:41.570800    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:41.570800    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:41.574456    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:15:41.574886    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:41.574886    6464 round_trippers.go:580]     Audit-Id: 91f12f3f-c2eb-40af-8697-85b4091936c3
	I0229 19:15:41.574886    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:41.574886    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:41.574886    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:41.574886    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:41.574886    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:41 GMT
	I0229 19:15:41.574886    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:41.575476    6464 node_ready.go:58] node "multinode-421600-m03" has status "Ready":"False"
	I0229 19:15:42.073021    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:42.073021    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:42.073092    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:42.073092    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:42.083527    6464 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0229 19:15:42.083659    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:42.083659    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:42.083659    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:42.083659    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:42 GMT
	I0229 19:15:42.083721    6464 round_trippers.go:580]     Audit-Id: 9a949f82-b941-46e7-8438-3134093619b8
	I0229 19:15:42.083721    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:42.083721    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:42.084055    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:42.573710    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:42.573898    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:42.573898    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:42.573898    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:42.576937    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:15:42.576937    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:42.576937    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:42.576937    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:42.576937    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:42.576937    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:42 GMT
	I0229 19:15:42.576937    6464 round_trippers.go:580]     Audit-Id: 63358ea9-a898-467b-8fc4-1659f259ef39
	I0229 19:15:42.576937    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:42.577999    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:43.075722    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:43.075722    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:43.075722    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:43.075722    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:43.080084    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:15:43.080084    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:43.080084    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:43.080084    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:43.080084    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:43.080084    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:43 GMT
	I0229 19:15:43.080084    6464 round_trippers.go:580]     Audit-Id: 9186b28a-8b92-45d6-bf3b-615dd2ac42b3
	I0229 19:15:43.080084    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:43.080445    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:43.577758    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:43.577829    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:43.577901    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:43.577901    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:43.581254    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:15:43.582150    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:43.582150    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:43.582150    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:43 GMT
	I0229 19:15:43.582150    6464 round_trippers.go:580]     Audit-Id: 2d7d605f-bd75-41cd-859e-4bd38df91ec8
	I0229 19:15:43.582150    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:43.582150    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:43.582150    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:43.582349    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:43.582788    6464 node_ready.go:58] node "multinode-421600-m03" has status "Ready":"False"
	I0229 19:15:44.065302    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:44.065302    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:44.065302    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:44.065302    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:44.070539    6464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 19:15:44.070539    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:44.070539    6464 round_trippers.go:580]     Audit-Id: 8b65de7c-a365-4e45-9fcf-fb96ec7c99c9
	I0229 19:15:44.070539    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:44.070539    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:44.070539    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:44.070539    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:44.070539    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:44 GMT
	I0229 19:15:44.071074    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:44.580234    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:44.580234    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:44.580234    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:44.580234    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:44.583821    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:15:44.584523    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:44.584523    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:44.584523    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:44 GMT
	I0229 19:15:44.584523    6464 round_trippers.go:580]     Audit-Id: 5d720cd2-2b9a-4bba-9a07-16d3ff678f81
	I0229 19:15:44.584523    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:44.584523    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:44.584523    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:44.584812    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:45.080068    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:45.080068    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:45.080068    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:45.080068    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:45.086061    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:15:45.086061    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:45.086161    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:45.086161    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:45.086161    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:45.086161    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:45 GMT
	I0229 19:15:45.086161    6464 round_trippers.go:580]     Audit-Id: 6047275d-a4ba-4edf-ba52-70de8085d59d
	I0229 19:15:45.086161    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:45.086334    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:45.565682    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:45.565912    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:45.565912    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:45.565912    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:45.569999    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:15:45.570180    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:45.570180    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:45.570180    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:45.570180    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:45.570180    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:45 GMT
	I0229 19:15:45.570180    6464 round_trippers.go:580]     Audit-Id: c81de8bc-b39a-462a-beab-8302c2209e15
	I0229 19:15:45.570180    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:45.570405    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:46.069351    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:46.069351    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:46.069414    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:46.069414    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:46.072717    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:15:46.073515    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:46.073515    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:46.073515    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:46.073515    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:46.073515    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:46 GMT
	I0229 19:15:46.073515    6464 round_trippers.go:580]     Audit-Id: 5f46528b-aa3a-4eae-82da-54693675eeac
	I0229 19:15:46.073515    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:46.074017    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:46.074567    6464 node_ready.go:58] node "multinode-421600-m03" has status "Ready":"False"
	I0229 19:15:46.572641    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:46.572641    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:46.572641    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:46.572641    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:46.576135    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:15:46.576135    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:46.576135    6464 round_trippers.go:580]     Audit-Id: 589f5b19-dc29-4c1a-a988-3efd937b4bd4
	I0229 19:15:46.576135    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:46.576135    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:46.576135    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:46.576135    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:46.576135    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:46 GMT
	I0229 19:15:46.576820    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:47.074669    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:47.074669    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:47.074669    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:47.074669    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:47.078312    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:15:47.079363    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:47.079363    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:47.079363    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:47.079363    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:47 GMT
	I0229 19:15:47.079432    6464 round_trippers.go:580]     Audit-Id: c871331e-5ffc-483a-86b1-5729fe45d7d6
	I0229 19:15:47.079432    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:47.079432    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:47.079432    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:47.577404    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:47.577404    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:47.577404    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:47.577404    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:47.580991    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:15:47.580991    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:47.580991    6464 round_trippers.go:580]     Audit-Id: f2be86e4-3894-455a-9a1b-f0f9ff750fc6
	I0229 19:15:47.580991    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:47.580991    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:47.580991    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:47.580991    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:47.580991    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:47 GMT
	I0229 19:15:47.581994    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:48.080466    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:48.080546    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:48.080546    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:48.080546    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:48.087025    6464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 19:15:48.087025    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:48.087025    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:48.087025    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:48.087025    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:48 GMT
	I0229 19:15:48.087025    6464 round_trippers.go:580]     Audit-Id: 0caa341b-b3fe-4858-88bd-cd70c26c3048
	I0229 19:15:48.087025    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:48.087025    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:48.087025    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:48.088403    6464 node_ready.go:58] node "multinode-421600-m03" has status "Ready":"False"
	I0229 19:15:48.565334    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:48.565418    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:48.565418    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:48.565418    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:48.568717    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:15:48.569717    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:48.569717    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:48.569717    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:48.569759    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:48.569759    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:48 GMT
	I0229 19:15:48.569759    6464 round_trippers.go:580]     Audit-Id: eb4fffc5-dbfa-415a-acf3-ff547f22e4d9
	I0229 19:15:48.569759    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:48.570208    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:49.079623    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:49.079721    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:49.079721    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:49.079721    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:49.084415    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:15:49.084415    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:49.084415    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:49.084415    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:49.084415    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:49.084415    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:49 GMT
	I0229 19:15:49.084415    6464 round_trippers.go:580]     Audit-Id: 96448057-2310-473a-b881-94db2d62b324
	I0229 19:15:49.084415    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:49.084887    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:49.567990    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:49.568182    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:49.568182    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:49.568182    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:49.571840    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:15:49.571840    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:49.572847    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:49.572847    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:49 GMT
	I0229 19:15:49.572847    6464 round_trippers.go:580]     Audit-Id: ee6a2f76-8afa-4e61-b992-4fb49808e73d
	I0229 19:15:49.572847    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:49.572847    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:49.572847    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:49.573008    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:50.070547    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:50.070547    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:50.070547    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:50.070547    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:50.074563    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:15:50.074563    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:50.074563    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:50.074563    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:50.074563    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:50.074563    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:50 GMT
	I0229 19:15:50.074563    6464 round_trippers.go:580]     Audit-Id: 4f69be19-5dda-4794-b8f9-18c67216f2e9
	I0229 19:15:50.074563    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:50.075029    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:50.572341    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:50.572435    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:50.572435    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:50.572435    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:50.576682    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:15:50.576960    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:50.576960    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:50.576960    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:50.576960    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:50.576960    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:50.576960    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:50 GMT
	I0229 19:15:50.577075    6464 round_trippers.go:580]     Audit-Id: 42e644ff-b2bb-4214-94e7-46113cfa6f2c
	I0229 19:15:50.577196    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:50.577937    6464 node_ready.go:58] node "multinode-421600-m03" has status "Ready":"False"
	I0229 19:15:51.072751    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:51.072751    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:51.072903    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:51.072903    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:51.079606    6464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 19:15:51.079606    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:51.079606    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:51 GMT
	I0229 19:15:51.079606    6464 round_trippers.go:580]     Audit-Id: a5100eb9-b6cf-4d6e-b455-ef1c69b1873e
	I0229 19:15:51.079606    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:51.079606    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:51.079606    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:51.079606    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:51.079606    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:51.574643    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:51.574922    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:51.574922    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:51.574922    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:51.579098    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:15:51.579098    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:51.579098    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:51 GMT
	I0229 19:15:51.579098    6464 round_trippers.go:580]     Audit-Id: 2e52bdeb-9d32-43bf-85ca-84b2a5184d83
	I0229 19:15:51.579098    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:51.579098    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:51.579098    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:51.579098    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:51.579833    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:52.076396    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:52.076396    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:52.076508    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:52.076508    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:52.082904    6464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 19:15:52.083378    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:52.083378    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:52.083378    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:52.083378    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:52.083378    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:52 GMT
	I0229 19:15:52.083378    6464 round_trippers.go:580]     Audit-Id: 5c3cce9d-f897-4c4c-9594-5cd2cba4c038
	I0229 19:15:52.083459    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:52.083529    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:52.577030    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:52.577030    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:52.577030    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:52.577030    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:52.581076    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:15:52.581076    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:52.581076    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:52.581076    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:52 GMT
	I0229 19:15:52.581076    6464 round_trippers.go:580]     Audit-Id: f1c0d67b-91b6-4887-a7e7-57a5b0dcb212
	I0229 19:15:52.581076    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:52.581076    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:52.581076    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:52.581314    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:52.581604    6464 node_ready.go:58] node "multinode-421600-m03" has status "Ready":"False"
	I0229 19:15:53.077220    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:53.077439    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:53.077523    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:53.077564    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:53.083084    6464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 19:15:53.083084    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:53.083084    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:53.083084    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:53.083084    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:53.083084    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:53.083084    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:53 GMT
	I0229 19:15:53.083084    6464 round_trippers.go:580]     Audit-Id: 6be50244-64e7-44c6-bfdc-baeec2323d33
	I0229 19:15:53.083614    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:53.579925    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:53.580009    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:53.580009    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:53.580009    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:53.583747    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:15:53.583747    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:53.583747    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:53 GMT
	I0229 19:15:53.583747    6464 round_trippers.go:580]     Audit-Id: dc237781-9f99-4451-bf7d-aefc92329a52
	I0229 19:15:53.583747    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:53.583747    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:53.583747    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:53.583747    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:53.584395    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:54.066033    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:54.066164    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:54.066164    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:54.066164    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:54.070605    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:15:54.071056    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:54.071056    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:54.071056    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:54.071056    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:54.071056    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:54.071056    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:54 GMT
	I0229 19:15:54.071056    6464 round_trippers.go:580]     Audit-Id: e3f6bae8-3263-4caf-96a4-073cbc2d65b5
	I0229 19:15:54.071413    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:54.579379    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:54.579379    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:54.579379    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:54.579379    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:54.582993    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:15:54.583312    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:54.583312    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:54.583312    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:54.583312    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:54.583312    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:54.583312    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:54 GMT
	I0229 19:15:54.583312    6464 round_trippers.go:580]     Audit-Id: c2001eac-1a16-44fb-9cf2-dae6ec23c6a5
	I0229 19:15:54.583523    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:54.584217    6464 node_ready.go:58] node "multinode-421600-m03" has status "Ready":"False"
	I0229 19:15:55.079635    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:55.079635    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:55.079635    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:55.079717    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:55.083980    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:15:55.084925    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:55.084965    6464 round_trippers.go:580]     Audit-Id: 7c152e40-6f4e-4b6d-85d4-05e38e125b63
	I0229 19:15:55.084965    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:55.085052    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:55.085052    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:55.085052    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:55.085052    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:55 GMT
	I0229 19:15:55.085392    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:55.577548    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:55.577548    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:55.577640    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:55.577640    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:55.580873    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:15:55.580873    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:55.581639    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:55 GMT
	I0229 19:15:55.581639    6464 round_trippers.go:580]     Audit-Id: 6025a45f-8d15-41dc-81a1-5546b64e8a4d
	I0229 19:15:55.581639    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:55.581639    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:55.581639    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:55.581639    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:55.581923    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:56.067750    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:56.067818    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:56.067896    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:56.067896    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:56.070864    6464 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 19:15:56.070864    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:56.070864    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:56.070864    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:56.070864    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:56 GMT
	I0229 19:15:56.070864    6464 round_trippers.go:580]     Audit-Id: f9285824-a7b0-4a93-9682-52fc8d24a56f
	I0229 19:15:56.070864    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:56.070864    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:56.071935    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:56.569212    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:56.569212    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:56.569212    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:56.569212    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:56.574097    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:15:56.574097    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:56.574097    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:56.574477    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:56.574477    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:56.574577    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:56 GMT
	I0229 19:15:56.574606    6464 round_trippers.go:580]     Audit-Id: 59ffd712-9cd6-4ca0-933c-2f9d64e42bb1
	I0229 19:15:56.574606    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:56.574784    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:57.070970    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:57.071031    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:57.071031    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:57.071099    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:57.075271    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:15:57.075519    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:57.075519    6464 round_trippers.go:580]     Audit-Id: ac7fabf6-8f1d-4acd-add7-3c7ed98c3e8a
	I0229 19:15:57.075519    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:57.075519    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:57.075591    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:57.075591    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:57.075591    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:57 GMT
	I0229 19:15:57.075698    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:57.076131    6464 node_ready.go:58] node "multinode-421600-m03" has status "Ready":"False"
	I0229 19:15:57.569920    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:57.570069    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:57.570069    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:57.570069    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:57.573403    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:15:57.573403    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:57.574418    6464 round_trippers.go:580]     Audit-Id: 51dc6b7b-fcac-46e0-882e-736ded479074
	I0229 19:15:57.574418    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:57.574418    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:57.574418    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:57.574418    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:57.574418    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:57 GMT
	I0229 19:15:57.574470    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:58.068603    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:58.068603    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:58.068603    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:58.068603    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:58.072649    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:15:58.072649    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:58.072649    6464 round_trippers.go:580]     Audit-Id: 6279df8c-af84-4d2b-942e-cc400e6c314f
	I0229 19:15:58.072649    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:58.072649    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:58.072649    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:58.072649    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:58.072649    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:58 GMT
	I0229 19:15:58.073089    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:58.568184    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:58.568260    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:58.568260    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:58.568260    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:58.572075    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:15:58.572294    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:58.572294    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:58.572294    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:58.572294    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:58 GMT
	I0229 19:15:58.572294    6464 round_trippers.go:580]     Audit-Id: 69b3a70a-771d-4c4c-942a-d94f45d48043
	I0229 19:15:58.572294    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:58.572294    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:58.572467    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:59.071699    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:59.071699    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:59.071699    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:59.071699    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:59.075754    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:15:59.075754    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:59.075754    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:59.075754    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:59 GMT
	I0229 19:15:59.075754    6464 round_trippers.go:580]     Audit-Id: 51e79708-5f41-439e-88e6-ce5d3e41cd73
	I0229 19:15:59.075754    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:59.075754    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:59.075754    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:59.077224    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:15:59.077224    6464 node_ready.go:58] node "multinode-421600-m03" has status "Ready":"False"
	I0229 19:15:59.577281    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:15:59.577281    6464 round_trippers.go:469] Request Headers:
	I0229 19:15:59.577281    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:15:59.577281    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:15:59.581333    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:15:59.581333    6464 round_trippers.go:577] Response Headers:
	I0229 19:15:59.581333    6464 round_trippers.go:580]     Audit-Id: 14e61740-dad0-449e-b762-c468b484975e
	I0229 19:15:59.581333    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:15:59.581333    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:15:59.581333    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:15:59.581333    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:15:59.581333    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:15:59 GMT
	I0229 19:15:59.581973    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:16:00.081293    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:16:00.081389    6464 round_trippers.go:469] Request Headers:
	I0229 19:16:00.081389    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:16:00.081389    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:16:00.087817    6464 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0229 19:16:00.087817    6464 round_trippers.go:577] Response Headers:
	I0229 19:16:00.087817    6464 round_trippers.go:580]     Audit-Id: f96cd7e8-553e-4601-af02-0164689f1645
	I0229 19:16:00.087817    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:16:00.087817    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:16:00.088127    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:16:00.088127    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:16:00.088127    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:16:00 GMT
	I0229 19:16:00.088503    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:16:00.571328    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:16:00.571531    6464 round_trippers.go:469] Request Headers:
	I0229 19:16:00.571531    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:16:00.571531    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:16:00.575342    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:16:00.575342    6464 round_trippers.go:577] Response Headers:
	I0229 19:16:00.575342    6464 round_trippers.go:580]     Audit-Id: 8ea06e9e-6347-4bf4-968f-215507fcf5ab
	I0229 19:16:00.575342    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:16:00.575342    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:16:00.575342    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:16:00.575342    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:16:00.575342    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:16:00 GMT
	I0229 19:16:00.576055    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:16:01.070916    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:16:01.070990    6464 round_trippers.go:469] Request Headers:
	I0229 19:16:01.070990    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:16:01.070990    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:16:01.076343    6464 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0229 19:16:01.076504    6464 round_trippers.go:577] Response Headers:
	I0229 19:16:01.076591    6464 round_trippers.go:580]     Audit-Id: 5c11e5f3-afbe-455d-9b0b-53bfc1af0f19
	I0229 19:16:01.076591    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:16:01.076591    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:16:01.076591    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:16:01.076591    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:16:01.076591    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:16:01 GMT
	I0229 19:16:01.077434    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2020","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3888 chars]
	I0229 19:16:01.078147    6464 node_ready.go:58] node "multinode-421600-m03" has status "Ready":"False"
	I0229 19:16:01.574022    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:16:01.574022    6464 round_trippers.go:469] Request Headers:
	I0229 19:16:01.574251    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:16:01.574251    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:16:01.577698    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:16:01.577698    6464 round_trippers.go:577] Response Headers:
	I0229 19:16:01.577698    6464 round_trippers.go:580]     Audit-Id: 30e83764-de93-4597-99bf-0552536e9d89
	I0229 19:16:01.577698    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:16:01.577698    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:16:01.577698    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:16:01.578666    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:16:01.578666    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:16:01 GMT
	I0229 19:16:01.578769    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2056","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3754 chars]
	I0229 19:16:01.578769    6464 node_ready.go:49] node "multinode-421600-m03" has status "Ready":"True"
	I0229 19:16:01.578769    6464 node_ready.go:38] duration metric: took 36.0131714s waiting for node "multinode-421600-m03" to be "Ready" ...
	I0229 19:16:01.578769    6464 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:16:01.579311    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods
	I0229 19:16:01.579311    6464 round_trippers.go:469] Request Headers:
	I0229 19:16:01.579311    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:16:01.579311    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:16:01.587210    6464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 19:16:01.587210    6464 round_trippers.go:577] Response Headers:
	I0229 19:16:01.587210    6464 round_trippers.go:580]     Audit-Id: 96e4e830-faaa-4dc6-b372-96876b55ed96
	I0229 19:16:01.587210    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:16:01.587210    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:16:01.587210    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:16:01.587210    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:16:01.587210    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:16:01 GMT
	I0229 19:16:01.589712    6464 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2056"},"items":[{"metadata":{"name":"coredns-5dd5756b68-5qhb2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"cb647b50-f478-4265-9ff1-b66190c46393","resourceVersion":"1685","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3eee4899-9868-46e9-9907-7fbe2995bab1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eee4899-9868-46e9-9907-7fbe2995bab1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82911 chars]
	I0229 19:16:01.593244    6464 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5qhb2" in "kube-system" namespace to be "Ready" ...
	I0229 19:16:01.593392    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5qhb2
	I0229 19:16:01.593431    6464 round_trippers.go:469] Request Headers:
	I0229 19:16:01.593431    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:16:01.593454    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:16:01.596069    6464 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 19:16:01.596069    6464 round_trippers.go:577] Response Headers:
	I0229 19:16:01.596069    6464 round_trippers.go:580]     Audit-Id: 1e8ba3b6-d576-4eb7-a3f2-42a6061ad903
	I0229 19:16:01.596069    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:16:01.596069    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:16:01.596069    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:16:01.596069    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:16:01.596069    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:16:01 GMT
	I0229 19:16:01.596667    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5qhb2","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"cb647b50-f478-4265-9ff1-b66190c46393","resourceVersion":"1685","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"3eee4899-9868-46e9-9907-7fbe2995bab1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3eee4899-9868-46e9-9907-7fbe2995bab1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6493 chars]
	I0229 19:16:01.597305    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:16:01.597305    6464 round_trippers.go:469] Request Headers:
	I0229 19:16:01.597305    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:16:01.597305    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:16:01.599898    6464 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 19:16:01.599898    6464 round_trippers.go:577] Response Headers:
	I0229 19:16:01.599898    6464 round_trippers.go:580]     Audit-Id: 3413873e-0f5a-4b64-beb8-3a85e0609eb0
	I0229 19:16:01.599898    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:16:01.599898    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:16:01.599898    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:16:01.599898    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:16:01.599898    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:16:01 GMT
	I0229 19:16:01.600798    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1651","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0229 19:16:01.600798    6464 pod_ready.go:92] pod "coredns-5dd5756b68-5qhb2" in "kube-system" namespace has status "Ready":"True"
	I0229 19:16:01.600798    6464 pod_ready.go:81] duration metric: took 7.5085ms waiting for pod "coredns-5dd5756b68-5qhb2" in "kube-system" namespace to be "Ready" ...
	I0229 19:16:01.600798    6464 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 19:16:01.601393    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-421600
	I0229 19:16:01.601393    6464 round_trippers.go:469] Request Headers:
	I0229 19:16:01.601393    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:16:01.601393    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:16:01.604402    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:16:01.604402    6464 round_trippers.go:577] Response Headers:
	I0229 19:16:01.604402    6464 round_trippers.go:580]     Audit-Id: ad879ec4-d594-47e9-8254-01d2b85049de
	I0229 19:16:01.604402    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:16:01.604402    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:16:01.604402    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:16:01.604402    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:16:01.604402    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:16:01 GMT
	I0229 19:16:01.604774    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-421600","namespace":"kube-system","uid":"a57a6b03-e79b-4fcd-8750-480d46e6feb7","resourceVersion":"1655","creationTimestamp":"2024-02-29T19:11:10Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.26.52.109:2379","kubernetes.io/config.hash":"ffd246c3f34c2bcd65e63e05d5465206","kubernetes.io/config.mirror":"ffd246c3f34c2bcd65e63e05d5465206","kubernetes.io/config.seen":"2024-02-29T19:11:04.922860790Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:11:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5863 chars]
	I0229 19:16:01.604859    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:16:01.604859    6464 round_trippers.go:469] Request Headers:
	I0229 19:16:01.605396    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:16:01.605396    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:16:01.612696    6464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 19:16:01.612739    6464 round_trippers.go:577] Response Headers:
	I0229 19:16:01.612739    6464 round_trippers.go:580]     Audit-Id: 71b6c6bf-cf46-43ab-8cd9-69d498f3c858
	I0229 19:16:01.612786    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:16:01.612786    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:16:01.612786    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:16:01.612786    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:16:01.612786    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:16:01 GMT
	I0229 19:16:01.612890    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1651","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0229 19:16:01.613570    6464 pod_ready.go:92] pod "etcd-multinode-421600" in "kube-system" namespace has status "Ready":"True"
	I0229 19:16:01.613637    6464 pod_ready.go:81] duration metric: took 12.8382ms waiting for pod "etcd-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 19:16:01.613699    6464 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 19:16:01.613803    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-421600
	I0229 19:16:01.613803    6464 round_trippers.go:469] Request Headers:
	I0229 19:16:01.613868    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:16:01.613868    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:16:01.621093    6464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 19:16:01.621369    6464 round_trippers.go:577] Response Headers:
	I0229 19:16:01.621369    6464 round_trippers.go:580]     Audit-Id: 324d856e-b07e-4bbd-a6d1-59ce72d29e58
	I0229 19:16:01.621369    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:16:01.621369    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:16:01.621369    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:16:01.621369    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:16:01.621369    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:16:01 GMT
	I0229 19:16:01.621623    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-421600","namespace":"kube-system","uid":"456b1ada-afd0-416c-a95f-71bea88e161d","resourceVersion":"1658","creationTimestamp":"2024-02-29T19:11:10Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.26.52.109:8443","kubernetes.io/config.hash":"aec335819ecb0b3c60068e2ed02eb80d","kubernetes.io/config.mirror":"aec335819ecb0b3c60068e2ed02eb80d","kubernetes.io/config.seen":"2024-02-29T19:11:04.922862090Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:11:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7400 chars]
	I0229 19:16:01.621843    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:16:01.621843    6464 round_trippers.go:469] Request Headers:
	I0229 19:16:01.621843    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:16:01.621843    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:16:01.625511    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:16:01.626457    6464 round_trippers.go:577] Response Headers:
	I0229 19:16:01.626457    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:16:01.626457    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:16:01.626457    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:16:01.626457    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:16:01.626457    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:16:01 GMT
	I0229 19:16:01.626457    6464 round_trippers.go:580]     Audit-Id: 3813a7e7-f14a-4f21-996d-e3ccd535383c
	I0229 19:16:01.626621    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1651","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0229 19:16:01.626896    6464 pod_ready.go:92] pod "kube-apiserver-multinode-421600" in "kube-system" namespace has status "Ready":"True"
	I0229 19:16:01.626896    6464 pod_ready.go:81] duration metric: took 13.1966ms waiting for pod "kube-apiserver-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 19:16:01.626896    6464 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 19:16:01.626896    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-421600
	I0229 19:16:01.626896    6464 round_trippers.go:469] Request Headers:
	I0229 19:16:01.626896    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:16:01.626896    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:16:01.629557    6464 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 19:16:01.629557    6464 round_trippers.go:577] Response Headers:
	I0229 19:16:01.629557    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:16:01.629557    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:16:01.629557    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:16:01.629557    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:16:01.629557    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:16:01 GMT
	I0229 19:16:01.629557    6464 round_trippers.go:580]     Audit-Id: c805dd11-e0af-4507-85b8-7ff74b2bf705
	I0229 19:16:01.630847    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-421600","namespace":"kube-system","uid":"a41ee888-f6df-43d4-9799-67a9ef0b6c87","resourceVersion":"1646","creationTimestamp":"2024-02-29T18:50:38Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"dabef371df5cd2a8b883d06621dfc6bd","kubernetes.io/config.mirror":"dabef371df5cd2a8b883d06621dfc6bd","kubernetes.io/config.seen":"2024-02-29T18:50:38.626332146Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7175 chars]
	I0229 19:16:01.631406    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:16:01.631406    6464 round_trippers.go:469] Request Headers:
	I0229 19:16:01.631406    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:16:01.631406    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:16:01.634235    6464 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0229 19:16:01.634235    6464 round_trippers.go:577] Response Headers:
	I0229 19:16:01.634235    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:16:01.634366    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:16:01.634366    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:16:01.634366    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:16:01.634366    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:16:01 GMT
	I0229 19:16:01.634366    6464 round_trippers.go:580]     Audit-Id: 20dee095-21d1-4e53-9e67-aaf4a5ddf529
	I0229 19:16:01.634530    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1651","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0229 19:16:01.634907    6464 pod_ready.go:92] pod "kube-controller-manager-multinode-421600" in "kube-system" namespace has status "Ready":"True"
	I0229 19:16:01.634970    6464 pod_ready.go:81] duration metric: took 8.0733ms waiting for pod "kube-controller-manager-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 19:16:01.634970    6464 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7c7xc" in "kube-system" namespace to be "Ready" ...
	I0229 19:16:01.776492    6464 request.go:629] Waited for 141.4339ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7c7xc
	I0229 19:16:01.776745    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7c7xc
	I0229 19:16:01.776851    6464 round_trippers.go:469] Request Headers:
	I0229 19:16:01.776851    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:16:01.776851    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:16:01.780534    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:16:01.780534    6464 round_trippers.go:577] Response Headers:
	I0229 19:16:01.780934    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:16:01.780934    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:16:01.780934    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:16:01 GMT
	I0229 19:16:01.780934    6464 round_trippers.go:580]     Audit-Id: 00ce05e0-f61b-4685-b7c2-1ce3e0c49336
	I0229 19:16:01.780934    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:16:01.780934    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:16:01.781024    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-7c7xc","generateName":"kube-proxy-","namespace":"kube-system","uid":"6f8e7fe9-d8e5-47ca-80fd-7e5f7ae43140","resourceVersion":"1844","creationTimestamp":"2024-02-29T18:53:36Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a888d9f6-ed77-4118-830b-881d923ceb9c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:53:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a888d9f6-ed77-4118-830b-881d923ceb9c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
	I0229 19:16:01.981401    6464 request.go:629] Waited for 199.6013ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m02
	I0229 19:16:01.981810    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m02
	I0229 19:16:01.981810    6464 round_trippers.go:469] Request Headers:
	I0229 19:16:01.981810    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:16:01.981810    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:16:01.988865    6464 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0229 19:16:01.988865    6464 round_trippers.go:577] Response Headers:
	I0229 19:16:01.988865    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:16:01.988865    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:16:01.988865    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:16:01.988865    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:16:01.988865    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:16:02 GMT
	I0229 19:16:01.988865    6464 round_trippers.go:580]     Audit-Id: fa132489-264b-42d5-b5f7-d6cc6047c1e8
	I0229 19:16:01.989821    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m02","uid":"338c4461-534d-49a5-942c-1346a36627e6","resourceVersion":"2005","creationTimestamp":"2024-02-29T19:13:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:13:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3805 chars]
	I0229 19:16:01.989821    6464 pod_ready.go:92] pod "kube-proxy-7c7xc" in "kube-system" namespace has status "Ready":"True"
	I0229 19:16:01.989821    6464 pod_ready.go:81] duration metric: took 354.8316ms waiting for pod "kube-proxy-7c7xc" in "kube-system" namespace to be "Ready" ...
	I0229 19:16:01.989821    6464 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fpk6m" in "kube-system" namespace to be "Ready" ...
	I0229 19:16:02.183905    6464 request.go:629] Waited for 193.9404ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fpk6m
	I0229 19:16:02.184005    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fpk6m
	I0229 19:16:02.184005    6464 round_trippers.go:469] Request Headers:
	I0229 19:16:02.184005    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:16:02.184005    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:16:02.188588    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:16:02.188588    6464 round_trippers.go:577] Response Headers:
	I0229 19:16:02.188951    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:16:02.188951    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:16:02.188951    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:16:02.188951    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:16:02.188951    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:16:02 GMT
	I0229 19:16:02.188951    6464 round_trippers.go:580]     Audit-Id: 2226edd1-65a1-4422-a45d-21107427ca22
	I0229 19:16:02.189174    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fpk6m","generateName":"kube-proxy-","namespace":"kube-system","uid":"4c99c6ec-5ab0-434d-b5a9-cb24b10f8bbf","resourceVersion":"1574","creationTimestamp":"2024-02-29T18:50:51Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a888d9f6-ed77-4118-830b-881d923ceb9c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a888d9f6-ed77-4118-830b-881d923ceb9c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
	I0229 19:16:02.386192    6464 request.go:629] Waited for 195.9982ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:16:02.386565    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:16:02.386565    6464 round_trippers.go:469] Request Headers:
	I0229 19:16:02.386565    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:16:02.386565    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:16:02.391028    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:16:02.391028    6464 round_trippers.go:577] Response Headers:
	I0229 19:16:02.391028    6464 round_trippers.go:580]     Audit-Id: 4fb087af-2ffb-440f-acce-37f1a6a5d777
	I0229 19:16:02.391028    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:16:02.391028    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:16:02.391028    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:16:02.391141    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:16:02.391141    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:16:02 GMT
	I0229 19:16:02.391184    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1651","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0229 19:16:02.391844    6464 pod_ready.go:92] pod "kube-proxy-fpk6m" in "kube-system" namespace has status "Ready":"True"
	I0229 19:16:02.391951    6464 pod_ready.go:81] duration metric: took 402.1079ms waiting for pod "kube-proxy-fpk6m" in "kube-system" namespace to be "Ready" ...
	I0229 19:16:02.391979    6464 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rhg8l" in "kube-system" namespace to be "Ready" ...
	I0229 19:16:02.588517    6464 request.go:629] Waited for 196.3106ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rhg8l
	I0229 19:16:02.588517    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rhg8l
	I0229 19:16:02.588517    6464 round_trippers.go:469] Request Headers:
	I0229 19:16:02.588762    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:16:02.588762    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:16:02.593030    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:16:02.593030    6464 round_trippers.go:577] Response Headers:
	I0229 19:16:02.593030    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:16:02.593030    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:16:02.593030    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:16:02.593030    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:16:02.593030    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:16:02 GMT
	I0229 19:16:02.593030    6464 round_trippers.go:580]     Audit-Id: 9689bbb4-5f5d-421f-86f0-bf581cdaa379
	I0229 19:16:02.593649    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rhg8l","generateName":"kube-proxy-","namespace":"kube-system","uid":"58dfdc35-3e50-486d-b7a7-5bae65934cd5","resourceVersion":"2031","creationTimestamp":"2024-02-29T18:57:47Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a888d9f6-ed77-4118-830b-881d923ceb9c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:57:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a888d9f6-ed77-4118-830b-881d923ceb9c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5739 chars]
	I0229 19:16:02.777875    6464 request.go:629] Waited for 183.5605ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:16:02.777875    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600-m03
	I0229 19:16:02.778051    6464 round_trippers.go:469] Request Headers:
	I0229 19:16:02.778051    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:16:02.778051    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:16:02.783190    6464 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0229 19:16:02.783272    6464 round_trippers.go:577] Response Headers:
	I0229 19:16:02.783272    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:16:02.783344    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:16:02 GMT
	I0229 19:16:02.783344    6464 round_trippers.go:580]     Audit-Id: d49989a6-35c0-49e7-a6a4-620b016fbfe9
	I0229 19:16:02.783344    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:16:02.783411    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:16:02.783411    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:16:02.783543    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600-m03","uid":"82d779b8-961f-46bf-be2f-fee54b664615","resourceVersion":"2056","creationTimestamp":"2024-02-29T19:15:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_02_29T19_15_24_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T19:15:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3754 chars]
	I0229 19:16:02.784145    6464 pod_ready.go:92] pod "kube-proxy-rhg8l" in "kube-system" namespace has status "Ready":"True"
	I0229 19:16:02.784145    6464 pod_ready.go:81] duration metric: took 392.1449ms waiting for pod "kube-proxy-rhg8l" in "kube-system" namespace to be "Ready" ...
	I0229 19:16:02.784145    6464 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 19:16:02.981826    6464 request.go:629] Waited for 197.1379ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-421600
	I0229 19:16:02.982051    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-421600
	I0229 19:16:02.982308    6464 round_trippers.go:469] Request Headers:
	I0229 19:16:02.982308    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:16:02.982308    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:16:02.985888    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:16:02.985888    6464 round_trippers.go:577] Response Headers:
	I0229 19:16:02.985888    6464 round_trippers.go:580]     Audit-Id: 6585a7f8-079d-43a6-8fa5-98a18439a8d5
	I0229 19:16:02.985888    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:16:02.985888    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:16:02.986201    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:16:02.986201    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:16:02.986201    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:16:03 GMT
	I0229 19:16:02.986201    6464 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-421600","namespace":"kube-system","uid":"6742b97c-a3db-4fca-8da3-54fcde6d405a","resourceVersion":"1669","creationTimestamp":"2024-02-29T18:50:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a2c94c0a4c322f0bf7fcafad0430344f","kubernetes.io/config.mirror":"a2c94c0a4c322f0bf7fcafad0430344f","kubernetes.io/config.seen":"2024-02-29T18:50:38.626333146Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-02-29T18:50:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4905 chars]
	I0229 19:16:03.182802    6464 request.go:629] Waited for 195.6419ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:16:03.182802    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes/multinode-421600
	I0229 19:16:03.182802    6464 round_trippers.go:469] Request Headers:
	I0229 19:16:03.182802    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:16:03.182802    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:16:03.187419    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:16:03.187419    6464 round_trippers.go:577] Response Headers:
	I0229 19:16:03.187419    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:16:03 GMT
	I0229 19:16:03.187504    6464 round_trippers.go:580]     Audit-Id: fbc76bdc-c3ac-4295-bbf6-58824c09b597
	I0229 19:16:03.187504    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:16:03.187504    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:16:03.187504    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:16:03.187504    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:16:03.187618    6464 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1651","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-02-29T18:50:35Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0229 19:16:03.188328    6464 pod_ready.go:92] pod "kube-scheduler-multinode-421600" in "kube-system" namespace has status "Ready":"True"
	I0229 19:16:03.188328    6464 pod_ready.go:81] duration metric: took 404.1605ms waiting for pod "kube-scheduler-multinode-421600" in "kube-system" namespace to be "Ready" ...
	I0229 19:16:03.188328    6464 pod_ready.go:38] duration metric: took 1.6094697s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0229 19:16:03.188328    6464 system_svc.go:44] waiting for kubelet service to be running ....
	I0229 19:16:03.197084    6464 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:16:03.224409    6464 system_svc.go:56] duration metric: took 35.7469ms WaitForService to wait for kubelet.
	I0229 19:16:03.224409    6464 kubeadm.go:581] duration metric: took 37.6981805s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0229 19:16:03.224409    6464 node_conditions.go:102] verifying NodePressure condition ...
	I0229 19:16:03.383345    6464 request.go:629] Waited for 158.8005ms due to client-side throttling, not priority and fairness, request: GET:https://172.26.52.109:8443/api/v1/nodes
	I0229 19:16:03.383936    6464 round_trippers.go:463] GET https://172.26.52.109:8443/api/v1/nodes
	I0229 19:16:03.383936    6464 round_trippers.go:469] Request Headers:
	I0229 19:16:03.383936    6464 round_trippers.go:473]     Accept: application/json, */*
	I0229 19:16:03.384091    6464 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0229 19:16:03.387760    6464 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0229 19:16:03.387760    6464 round_trippers.go:577] Response Headers:
	I0229 19:16:03.387760    6464 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1e326e3e-ee28-4577-8534-7e63ea9ad05e
	I0229 19:16:03.387760    6464 round_trippers.go:580]     Date: Thu, 29 Feb 2024 19:16:03 GMT
	I0229 19:16:03.387760    6464 round_trippers.go:580]     Audit-Id: 97e5d237-f9ea-4f1b-9634-338cf20e88e9
	I0229 19:16:03.387760    6464 round_trippers.go:580]     Cache-Control: no-cache, private
	I0229 19:16:03.387760    6464 round_trippers.go:580]     Content-Type: application/json
	I0229 19:16:03.387760    6464 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c1848a89-c5f5-44af-9cc7-68c6b11ecfd8
	I0229 19:16:03.389233    6464 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2059"},"items":[{"metadata":{"name":"multinode-421600","uid":"e02e78be-e1e8-4683-9673-cc461da56a98","resourceVersion":"1651","creationTimestamp":"2024-02-29T18:50:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-421600","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9e09574fdbf0719156a1e892f7aeb8b71f0cf19","minikube.k8s.io/name":"multinode-421600","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_02_29T18_50_39_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 14714 chars]
	I0229 19:16:03.389968    6464 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 19:16:03.389968    6464 node_conditions.go:123] node cpu capacity is 2
	I0229 19:16:03.389968    6464 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 19:16:03.389968    6464 node_conditions.go:123] node cpu capacity is 2
	I0229 19:16:03.389968    6464 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0229 19:16:03.390226    6464 node_conditions.go:123] node cpu capacity is 2
	I0229 19:16:03.390226    6464 node_conditions.go:105] duration metric: took 165.8078ms to run NodePressure ...
	I0229 19:16:03.390226    6464 start.go:228] waiting for startup goroutines ...
	I0229 19:16:03.390226    6464 start.go:242] writing updated cluster config ...
	I0229 19:16:03.400170    6464 ssh_runner.go:195] Run: rm -f paused
	I0229 19:16:03.525974    6464 start.go:601] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0229 19:16:03.527496    6464 out.go:177] * Done! kubectl is now configured to use "multinode-421600" cluster and "default" namespace by default
	
	
	==> Docker <==
	Feb 29 19:11:26 multinode-421600 dockerd[1007]: time="2024-02-29T19:11:26.869528536Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 19:11:26 multinode-421600 dockerd[1007]: time="2024-02-29T19:11:26.869570037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 19:11:26 multinode-421600 dockerd[1007]: time="2024-02-29T19:11:26.869790642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 19:11:26 multinode-421600 dockerd[1007]: time="2024-02-29T19:11:26.873195015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 19:11:26 multinode-421600 dockerd[1007]: time="2024-02-29T19:11:26.873241316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 19:11:26 multinode-421600 dockerd[1007]: time="2024-02-29T19:11:26.873252517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 19:11:26 multinode-421600 dockerd[1007]: time="2024-02-29T19:11:26.873325718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 19:11:27 multinode-421600 cri-dockerd[1207]: time="2024-02-29T19:11:27Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/31c100ac13a801441b84e1f7a3eb09d267791d3e73d1df9686cca1913333bd13/resolv.conf as [nameserver 172.26.48.1]"
	Feb 29 19:11:27 multinode-421600 cri-dockerd[1207]: time="2024-02-29T19:11:27Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7a97b43f6ec67b787a826b667945b16c83552b84f5e40b68e2c6d5b5d3c637c1/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Feb 29 19:11:27 multinode-421600 dockerd[1007]: time="2024-02-29T19:11:27.323640569Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 19:11:27 multinode-421600 dockerd[1007]: time="2024-02-29T19:11:27.323773374Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 19:11:27 multinode-421600 dockerd[1007]: time="2024-02-29T19:11:27.323793474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 19:11:27 multinode-421600 dockerd[1007]: time="2024-02-29T19:11:27.323877977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 19:11:27 multinode-421600 dockerd[1007]: time="2024-02-29T19:11:27.326422156Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 19:11:27 multinode-421600 dockerd[1007]: time="2024-02-29T19:11:27.330022068Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 19:11:27 multinode-421600 dockerd[1007]: time="2024-02-29T19:11:27.330356879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 19:11:27 multinode-421600 dockerd[1007]: time="2024-02-29T19:11:27.333169367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 19:11:42 multinode-421600 dockerd[1001]: time="2024-02-29T19:11:42.128531464Z" level=info msg="ignoring event" container=8f04da9e9408b7057215ff820df48a6eeb00e33adc5744612357b58a292777d2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 29 19:11:42 multinode-421600 dockerd[1007]: time="2024-02-29T19:11:42.129784850Z" level=info msg="shim disconnected" id=8f04da9e9408b7057215ff820df48a6eeb00e33adc5744612357b58a292777d2 namespace=moby
	Feb 29 19:11:42 multinode-421600 dockerd[1007]: time="2024-02-29T19:11:42.129964677Z" level=warning msg="cleaning up after shim disconnected" id=8f04da9e9408b7057215ff820df48a6eeb00e33adc5744612357b58a292777d2 namespace=moby
	Feb 29 19:11:42 multinode-421600 dockerd[1007]: time="2024-02-29T19:11:42.130284025Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 29 19:11:53 multinode-421600 dockerd[1007]: time="2024-02-29T19:11:53.089502869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 19:11:53 multinode-421600 dockerd[1007]: time="2024-02-29T19:11:53.089651492Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 19:11:53 multinode-421600 dockerd[1007]: time="2024-02-29T19:11:53.089666494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 19:11:53 multinode-421600 dockerd[1007]: time="2024-02-29T19:11:53.089762209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	78cf0ffa6d5d5       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       2                   4790eba63978e       storage-provisioner
	8d5ace3f2c96e       8c811b4aec35f                                                                                         4 minutes ago       Running             busybox                   1                   7a97b43f6ec67       busybox-5b5d89c9d6-4lvtb
	33069159bf8a7       ead0a4a53df89                                                                                         4 minutes ago       Running             coredns                   1                   31c100ac13a80       coredns-5dd5756b68-5qhb2
	799626a38bfd5       4950bb10b3f87                                                                                         5 minutes ago       Running             kindnet-cni               1                   9ac726668a0da       kindnet-447dh
	8f04da9e9408b       6e38f40d628db                                                                                         5 minutes ago       Exited              storage-provisioner       1                   4790eba63978e       storage-provisioner
	7e4ebe33d701a       83f6cc407eed8                                                                                         5 minutes ago       Running             kube-proxy                1                   ca0874b4b37af       kube-proxy-fpk6m
	fdbd656584354       7fe0e6f37db33                                                                                         5 minutes ago       Running             kube-apiserver            0                   0d23ce2c7912a       kube-apiserver-multinode-421600
	dad2f1b1d2f07       d058aa5ab969c                                                                                         5 minutes ago       Running             kube-controller-manager   1                   1b30c84566548       kube-controller-manager-multinode-421600
	c6d6e0e1b0fa7       e3db313c6dbc0                                                                                         5 minutes ago       Running             kube-scheduler            1                   d2ed79cff8671       kube-scheduler-multinode-421600
	9344d56bc8a34       73deb9a3f7025                                                                                         5 minutes ago       Running             etcd                      0                   a993dab27bb28       etcd-multinode-421600
	f23bdec6fb5c7       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   22 minutes ago      Exited              busybox                   0                   957fdff4fb39a       busybox-5b5d89c9d6-4lvtb
	7be33bccda15c       ead0a4a53df89                                                                                         25 minutes ago      Exited              coredns                   0                   f4d0b06ecf4a6       coredns-5dd5756b68-5qhb2
	92f6a9511f4fe       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              25 minutes ago      Exited              kindnet-cni               0                   779c3df146b26       kindnet-447dh
	2f8a25ce65da1       83f6cc407eed8                                                                                         25 minutes ago      Exited              kube-proxy                0                   39324e6654181       kube-proxy-fpk6m
	52fe82a87fa81       d058aa5ab969c                                                                                         25 minutes ago      Exited              kube-controller-manager   0                   d9fcf1cc8d350       kube-controller-manager-multinode-421600
	b8c8786727c5e       e3db313c6dbc0                                                                                         25 minutes ago      Exited              kube-scheduler            0                   2a191aae0ba26       kube-scheduler-multinode-421600
	
	
	==> coredns [33069159bf8a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 09f0998677e0c19d72433bdbc19471218bfe4a8b92405418740861874d1549e73cec4df8f6750d3139464010abec770181315be2b4c8b222ced8b0f05062ec0c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60195 - 24896 "HINFO IN 4469753311152148073.5064160358521336636. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.042554969s
	
	
	==> coredns [7be33bccda15] <==
	[INFO] 10.244.1.2:52508 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.00016501s
	[INFO] 10.244.1.2:34502 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000079905s
	[INFO] 10.244.1.2:38146 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000136708s
	[INFO] 10.244.1.2:47439 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000133208s
	[INFO] 10.244.1.2:59021 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000056503s
	[INFO] 10.244.1.2:39203 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000148509s
	[INFO] 10.244.1.2:58216 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071705s
	[INFO] 10.244.0.3:43754 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000265615s
	[INFO] 10.244.0.3:60250 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111706s
	[INFO] 10.244.0.3:34465 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000072904s
	[INFO] 10.244.0.3:43590 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000143708s
	[INFO] 10.244.1.2:42897 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111607s
	[INFO] 10.244.1.2:33030 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160909s
	[INFO] 10.244.1.2:33206 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068004s
	[INFO] 10.244.1.2:45851 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064203s
	[INFO] 10.244.0.3:34007 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120507s
	[INFO] 10.244.0.3:52254 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000114807s
	[INFO] 10.244.0.3:35961 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00016441s
	[INFO] 10.244.0.3:47154 - 5 "PTR IN 1.48.26.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000110406s
	[INFO] 10.244.1.2:58408 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092206s
	[INFO] 10.244.1.2:33917 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000159809s
	[INFO] 10.244.1.2:35059 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000111407s
	[INFO] 10.244.1.2:34636 - 5 "PTR IN 1.48.26.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000072604s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-421600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-421600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19
	                    minikube.k8s.io/name=multinode-421600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_29T18_50_39_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 18:50:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-421600
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 19:16:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 19:11:20 +0000   Thu, 29 Feb 2024 18:50:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 19:11:20 +0000   Thu, 29 Feb 2024 18:50:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 19:11:20 +0000   Thu, 29 Feb 2024 18:50:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 19:11:20 +0000   Thu, 29 Feb 2024 19:11:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.26.52.109
	  Hostname:    multinode-421600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 3fb2b88503ea4491a5a0e7daf64d4ea7
	  System UUID:                d3f22368-baf0-cc4c-80fb-62de8b17a3eb
	  Boot ID:                    3ba9e895-59b6-44a5-8d54-54e038a6d950
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-4lvtb                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-5dd5756b68-5qhb2                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     25m
	  kube-system                 etcd-multinode-421600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m12s
	  kube-system                 kindnet-447dh                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      25m
	  kube-system                 kube-apiserver-multinode-421600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 kube-controller-manager-multinode-421600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-proxy-fpk6m                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-scheduler-multinode-421600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 25m                    kube-proxy       
	  Normal  Starting                 5m10s                  kube-proxy       
	  Normal  NodeHasSufficientPID     25m                    kubelet          Node multinode-421600 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  25m                    kubelet          Node multinode-421600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25m                    kubelet          Node multinode-421600 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 25m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           25m                    node-controller  Node multinode-421600 event: Registered Node multinode-421600 in Controller
	  Normal  NodeReady                25m                    kubelet          Node multinode-421600 status is now: NodeReady
	  Normal  Starting                 5m18s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m17s (x8 over 5m17s)  kubelet          Node multinode-421600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m17s (x8 over 5m17s)  kubelet          Node multinode-421600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m17s (x7 over 5m17s)  kubelet          Node multinode-421600 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m59s                  node-controller  Node multinode-421600 event: Registered Node multinode-421600 in Controller
	
	
	Name:               multinode-421600-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-421600-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19
	                    minikube.k8s.io/name=multinode-421600
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_02_29T19_15_24_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 19:13:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-421600-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 19:16:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 19:13:29 +0000   Thu, 29 Feb 2024 19:13:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 19:13:29 +0000   Thu, 29 Feb 2024 19:13:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 19:13:29 +0000   Thu, 29 Feb 2024 19:13:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 19:13:29 +0000   Thu, 29 Feb 2024 19:13:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.26.62.204
	  Hostname:    multinode-421600-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 18e9eeff3c4f42ab848f8715baa3a924
	  System UUID:                6a36fbf6-756c-e04e-acf4-cc2e8747fe39
	  Boot ID:                    08a69872-a2d2-4cca-9832-bc6c850a497b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-jdv8q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 kindnet-zblbg               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      22m
	  kube-system                 kube-proxy-7c7xc            0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m56s                  kube-proxy       
	  Normal  Starting                 22m                    kube-proxy       
	  Normal  NodeHasSufficientMemory  22m (x5 over 22m)      kubelet          Node multinode-421600-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x5 over 22m)      kubelet          Node multinode-421600-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x5 over 22m)      kubelet          Node multinode-421600-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                22m                    kubelet          Node multinode-421600-m02 status is now: NodeReady
	  Normal  Starting                 2m58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m58s (x2 over 2m58s)  kubelet          Node multinode-421600-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m58s (x2 over 2m58s)  kubelet          Node multinode-421600-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m58s (x2 over 2m58s)  kubelet          Node multinode-421600-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m54s                  node-controller  Node multinode-421600-m02 event: Registered Node multinode-421600-m02 in Controller
	  Normal  NodeReady                2m53s                  kubelet          Node multinode-421600-m02 status is now: NodeReady
	
	
	Name:               multinode-421600-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-421600-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9e09574fdbf0719156a1e892f7aeb8b71f0cf19
	                    minikube.k8s.io/name=multinode-421600
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_02_29T19_15_24_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 19:15:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-421600-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 19:16:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 19:16:01 +0000   Thu, 29 Feb 2024 19:15:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 19:16:01 +0000   Thu, 29 Feb 2024 19:15:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 19:16:01 +0000   Thu, 29 Feb 2024 19:15:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 19:16:01 +0000   Thu, 29 Feb 2024 19:16:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.26.59.9
	  Hostname:    multinode-421600-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 444ab8d60a1441d7aa9b88db0c0a8205
	  System UUID:                47cfe47f-f06a-994b-9959-5c0c745d75f9
	  Boot ID:                    1765117f-8c53-45f0-af9c-a2bebf8f9981
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-7nzdd       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      18m
	  kube-system                 kube-proxy-rhg8l    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  Starting                 18m                  kube-proxy  
	  Normal  Starting                 43s                  kube-proxy  
	  Normal  Starting                 9m5s                 kube-proxy  
	  Normal  NodeHasSufficientMemory  18m (x5 over 18m)    kubelet     Node multinode-421600-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x5 over 18m)    kubelet     Node multinode-421600-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x5 over 18m)    kubelet     Node multinode-421600-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                18m                  kubelet     Node multinode-421600-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     9m8s (x2 over 9m8s)  kubelet     Node multinode-421600-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    9m8s (x2 over 9m8s)  kubelet     Node multinode-421600-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  9m8s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m8s (x2 over 9m8s)  kubelet     Node multinode-421600-m03 status is now: NodeHasSufficientMemory
	  Normal  Starting                 9m8s                 kubelet     Starting kubelet.
	  Normal  NodeReady                9m4s                 kubelet     Node multinode-421600-m03 status is now: NodeReady
	  Normal  Starting                 59s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x2 over 59s)    kubelet     Node multinode-421600-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x2 over 59s)    kubelet     Node multinode-421600-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x2 over 59s)    kubelet     Node multinode-421600-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  59s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                21s                  kubelet     Node multinode-421600-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.058729] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.026499] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	              * this clock source is slow. Consider trying other clock sources
	[  +5.856701] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.747116] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.819587] systemd-fstab-generator[113]: Ignoring "noauto" option for root device
	[  +7.432467] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Feb29 19:10] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.194585] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[ +24.099747] systemd-fstab-generator[927]: Ignoring "noauto" option for root device
	[  +0.109196] kauditd_printk_skb: 73 callbacks suppressed
	[  +0.469305] systemd-fstab-generator[967]: Ignoring "noauto" option for root device
	[  +0.192727] systemd-fstab-generator[979]: Ignoring "noauto" option for root device
	[  +0.226557] systemd-fstab-generator[993]: Ignoring "noauto" option for root device
	[  +1.895094] systemd-fstab-generator[1160]: Ignoring "noauto" option for root device
	[  +0.200203] systemd-fstab-generator[1172]: Ignoring "noauto" option for root device
	[  +0.181199] systemd-fstab-generator[1184]: Ignoring "noauto" option for root device
	[Feb29 19:11] systemd-fstab-generator[1199]: Ignoring "noauto" option for root device
	[  +3.751660] systemd-fstab-generator[1420]: Ignoring "noauto" option for root device
	[  +0.097021] kauditd_printk_skb: 205 callbacks suppressed
	[  +7.177568] kauditd_printk_skb: 62 callbacks suppressed
	[ +11.790264] kauditd_printk_skb: 48 callbacks suppressed
	[  +5.340119] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [9344d56bc8a3] <==
	{"level":"info","ts":"2024-02-29T19:11:06.742731Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-29T19:11:06.742907Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-29T19:11:06.745807Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-02-29T19:11:06.747322Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.26.52.109:2380"}
	{"level":"info","ts":"2024-02-29T19:11:06.747393Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.26.52.109:2380"}
	{"level":"info","ts":"2024-02-29T19:11:06.74582Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2288677eaed97ae9 switched to configuration voters=(2488352587919227625)"}
	{"level":"info","ts":"2024-02-29T19:11:06.747857Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"2288677eaed97ae9","initial-advertise-peer-urls":["https://172.26.52.109:2380"],"listen-peer-urls":["https://172.26.52.109:2380"],"advertise-client-urls":["https://172.26.52.109:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.26.52.109:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-29T19:11:06.747898Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-29T19:11:06.751089Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3ab6b21c82a909c4","local-member-id":"2288677eaed97ae9","added-peer-id":"2288677eaed97ae9","added-peer-peer-urls":["https://172.26.62.28:2380"]}
	{"level":"info","ts":"2024-02-29T19:11:06.751305Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3ab6b21c82a909c4","local-member-id":"2288677eaed97ae9","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T19:11:06.751426Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T19:11:08.61404Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2288677eaed97ae9 is starting a new election at term 2"}
	{"level":"info","ts":"2024-02-29T19:11:08.614301Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2288677eaed97ae9 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-02-29T19:11:08.614474Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2288677eaed97ae9 received MsgPreVoteResp from 2288677eaed97ae9 at term 2"}
	{"level":"info","ts":"2024-02-29T19:11:08.614574Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2288677eaed97ae9 became candidate at term 3"}
	{"level":"info","ts":"2024-02-29T19:11:08.614689Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2288677eaed97ae9 received MsgVoteResp from 2288677eaed97ae9 at term 3"}
	{"level":"info","ts":"2024-02-29T19:11:08.614804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2288677eaed97ae9 became leader at term 3"}
	{"level":"info","ts":"2024-02-29T19:11:08.614997Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2288677eaed97ae9 elected leader 2288677eaed97ae9 at term 3"}
	{"level":"info","ts":"2024-02-29T19:11:08.622255Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"2288677eaed97ae9","local-member-attributes":"{Name:multinode-421600 ClientURLs:[https://172.26.52.109:2379]}","request-path":"/0/members/2288677eaed97ae9/attributes","cluster-id":"3ab6b21c82a909c4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-29T19:11:08.622457Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T19:11:08.623188Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T19:11:08.624422Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-29T19:11:08.625384Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.26.52.109:2379"}
	{"level":"info","ts":"2024-02-29T19:11:08.702107Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-29T19:11:08.702218Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:16:23 up 6 min,  0 users,  load average: 0.13, 0.16, 0.08
	Linux multinode-421600 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [799626a38bfd] <==
	I0229 19:15:33.665030       1 main.go:250] Node multinode-421600-m03 has CIDR [10.244.2.0/24] 
	I0229 19:15:43.671794       1 main.go:223] Handling node with IPs: map[172.26.52.109:{}]
	I0229 19:15:43.671836       1 main.go:227] handling current node
	I0229 19:15:43.671848       1 main.go:223] Handling node with IPs: map[172.26.62.204:{}]
	I0229 19:15:43.671855       1 main.go:250] Node multinode-421600-m02 has CIDR [10.244.1.0/24] 
	I0229 19:15:43.671967       1 main.go:223] Handling node with IPs: map[172.26.59.9:{}]
	I0229 19:15:43.672073       1 main.go:250] Node multinode-421600-m03 has CIDR [10.244.2.0/24] 
	I0229 19:15:53.686503       1 main.go:223] Handling node with IPs: map[172.26.52.109:{}]
	I0229 19:15:53.686608       1 main.go:227] handling current node
	I0229 19:15:53.686621       1 main.go:223] Handling node with IPs: map[172.26.62.204:{}]
	I0229 19:15:53.686629       1 main.go:250] Node multinode-421600-m02 has CIDR [10.244.1.0/24] 
	I0229 19:15:53.687334       1 main.go:223] Handling node with IPs: map[172.26.59.9:{}]
	I0229 19:15:53.687414       1 main.go:250] Node multinode-421600-m03 has CIDR [10.244.2.0/24] 
	I0229 19:16:03.693583       1 main.go:223] Handling node with IPs: map[172.26.52.109:{}]
	I0229 19:16:03.693691       1 main.go:227] handling current node
	I0229 19:16:03.693705       1 main.go:223] Handling node with IPs: map[172.26.62.204:{}]
	I0229 19:16:03.693714       1 main.go:250] Node multinode-421600-m02 has CIDR [10.244.1.0/24] 
	I0229 19:16:03.694177       1 main.go:223] Handling node with IPs: map[172.26.59.9:{}]
	I0229 19:16:03.694212       1 main.go:250] Node multinode-421600-m03 has CIDR [10.244.2.0/24] 
	I0229 19:16:13.708092       1 main.go:223] Handling node with IPs: map[172.26.52.109:{}]
	I0229 19:16:13.708262       1 main.go:227] handling current node
	I0229 19:16:13.708277       1 main.go:223] Handling node with IPs: map[172.26.62.204:{}]
	I0229 19:16:13.708285       1 main.go:250] Node multinode-421600-m02 has CIDR [10.244.1.0/24] 
	I0229 19:16:13.708873       1 main.go:223] Handling node with IPs: map[172.26.59.9:{}]
	I0229 19:16:13.708959       1 main.go:250] Node multinode-421600-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [92f6a9511f4f] <==
	I0229 19:07:19.901192       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.26.50.77 Flags: [] Table: 0} 
	I0229 19:07:29.912910       1 main.go:223] Handling node with IPs: map[172.26.62.28:{}]
	I0229 19:07:29.913010       1 main.go:227] handling current node
	I0229 19:07:29.913024       1 main.go:223] Handling node with IPs: map[172.26.56.47:{}]
	I0229 19:07:29.913033       1 main.go:250] Node multinode-421600-m02 has CIDR [10.244.1.0/24] 
	I0229 19:07:29.913209       1 main.go:223] Handling node with IPs: map[172.26.50.77:{}]
	I0229 19:07:29.913275       1 main.go:250] Node multinode-421600-m03 has CIDR [10.244.3.0/24] 
	I0229 19:07:39.929780       1 main.go:223] Handling node with IPs: map[172.26.62.28:{}]
	I0229 19:07:39.929906       1 main.go:227] handling current node
	I0229 19:07:39.929919       1 main.go:223] Handling node with IPs: map[172.26.56.47:{}]
	I0229 19:07:39.929928       1 main.go:250] Node multinode-421600-m02 has CIDR [10.244.1.0/24] 
	I0229 19:07:39.930321       1 main.go:223] Handling node with IPs: map[172.26.50.77:{}]
	I0229 19:07:39.930338       1 main.go:250] Node multinode-421600-m03 has CIDR [10.244.3.0/24] 
	I0229 19:07:49.945640       1 main.go:223] Handling node with IPs: map[172.26.62.28:{}]
	I0229 19:07:49.945677       1 main.go:227] handling current node
	I0229 19:07:49.945688       1 main.go:223] Handling node with IPs: map[172.26.56.47:{}]
	I0229 19:07:49.945695       1 main.go:250] Node multinode-421600-m02 has CIDR [10.244.1.0/24] 
	I0229 19:07:49.946255       1 main.go:223] Handling node with IPs: map[172.26.50.77:{}]
	I0229 19:07:49.946289       1 main.go:250] Node multinode-421600-m03 has CIDR [10.244.3.0/24] 
	I0229 19:07:59.952748       1 main.go:223] Handling node with IPs: map[172.26.62.28:{}]
	I0229 19:07:59.952957       1 main.go:227] handling current node
	I0229 19:07:59.952974       1 main.go:223] Handling node with IPs: map[172.26.56.47:{}]
	I0229 19:07:59.952984       1 main.go:250] Node multinode-421600-m02 has CIDR [10.244.1.0/24] 
	I0229 19:07:59.953632       1 main.go:223] Handling node with IPs: map[172.26.50.77:{}]
	I0229 19:07:59.953720       1 main.go:250] Node multinode-421600-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [fdbd65658435] <==
	I0229 19:11:10.420401       1 controller.go:78] Starting OpenAPI AggregationController
	I0229 19:11:10.427289       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0229 19:11:10.427587       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0229 19:11:10.587649       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0229 19:11:10.613524       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0229 19:11:10.616592       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0229 19:11:10.617899       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0229 19:11:10.618789       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0229 19:11:10.618816       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0229 19:11:10.619700       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0229 19:11:10.620424       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0229 19:11:10.621675       1 aggregator.go:166] initial CRD sync complete...
	I0229 19:11:10.621699       1 autoregister_controller.go:141] Starting autoregister controller
	I0229 19:11:10.621706       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0229 19:11:10.621713       1 cache.go:39] Caches are synced for autoregister controller
	I0229 19:11:10.629843       1 shared_informer.go:318] Caches are synced for configmaps
	I0229 19:11:11.416225       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0229 19:11:11.848922       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.26.52.109]
	I0229 19:11:11.850579       1 controller.go:624] quota admission added evaluator for: endpoints
	I0229 19:11:11.863375       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0229 19:11:13.724835       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0229 19:11:13.892145       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0229 19:11:13.904685       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0229 19:11:13.983733       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0229 19:11:13.993915       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [52fe82a87fa8] <==
	I0229 18:54:18.959262       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="16.29733ms"
	I0229 18:54:18.959352       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="41.102µs"
	I0229 18:54:18.968585       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="74.004µs"
	I0229 18:54:18.969832       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="85.404µs"
	I0229 18:54:20.840543       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="7.829566ms"
	I0229 18:54:20.841625       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="57.503µs"
	I0229 18:54:21.594353       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="6.873507ms"
	I0229 18:54:21.594860       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="42.502µs"
	I0229 18:57:47.377950       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-421600-m02"
	I0229 18:57:47.378305       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-421600-m03\" does not exist"
	I0229 18:57:47.390200       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-421600-m03" podCIDRs=["10.244.2.0/24"]
	I0229 18:57:47.396377       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-7nzdd"
	I0229 18:57:47.396417       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rhg8l"
	I0229 18:57:50.612841       1 event.go:307] "Event occurred" object="multinode-421600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-421600-m03 event: Registered Node multinode-421600-m03 in Controller"
	I0229 18:57:50.613260       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-421600-m03"
	I0229 18:58:08.102532       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-421600-m02"
	I0229 19:05:05.731740       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-421600-m02"
	I0229 19:05:05.733572       1 event.go:307] "Event occurred" object="multinode-421600-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-421600-m03 status is now: NodeNotReady"
	I0229 19:05:05.750119       1 event.go:307] "Event occurred" object="kube-system/kindnet-7nzdd" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0229 19:05:05.767885       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-rhg8l" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0229 19:07:13.639611       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-421600-m02"
	I0229 19:07:14.943614       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-421600-m02"
	I0229 19:07:14.944539       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-421600-m03\" does not exist"
	I0229 19:07:14.955893       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-421600-m03" podCIDRs=["10.244.3.0/24"]
	I0229 19:07:18.431138       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-421600-m02"
	
	
	==> kube-controller-manager [dad2f1b1d2f0] <==
	I0229 19:13:20.335968       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-jdv8q"
	I0229 19:13:20.356137       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="32.167796ms"
	I0229 19:13:20.368708       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="12.477826ms"
	I0229 19:13:20.368797       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="38.107µs"
	I0229 19:13:23.444664       1 event.go:307] "Event occurred" object="multinode-421600-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-421600-m02 event: Removing Node multinode-421600-m02 from Controller"
	I0229 19:13:24.685344       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-421600-m02\" does not exist"
	I0229 19:13:24.687465       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-dk9k8" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-dk9k8"
	I0229 19:13:24.696832       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-421600-m02" podCIDRs=["10.244.1.0/24"]
	I0229 19:13:25.554762       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="480.79µs"
	I0229 19:13:28.445214       1 event.go:307] "Event occurred" object="multinode-421600-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-421600-m02 event: Registered Node multinode-421600-m02 in Controller"
	I0229 19:13:30.018823       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-421600-m02"
	I0229 19:13:30.043393       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="77.814µs"
	I0229 19:13:33.460199       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-dk9k8" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-dk9k8"
	I0229 19:13:35.706270       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="91.217µs"
	I0229 19:13:35.955696       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="165.031µs"
	I0229 19:13:35.959672       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="94.318µs"
	I0229 19:13:37.818874       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="48.51µs"
	I0229 19:13:37.846467       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="138.126µs"
	I0229 19:13:39.012174       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="14.017249ms"
	I0229 19:13:39.013454       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="39.908µs"
	I0229 19:15:21.827958       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-421600-m02"
	I0229 19:15:23.445277       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-421600-m03\" does not exist"
	I0229 19:15:23.445671       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-421600-m02"
	I0229 19:15:23.456464       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-421600-m03" podCIDRs=["10.244.2.0/24"]
	I0229 19:16:01.641746       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-421600-m02"
	
	
	==> kube-proxy [2f8a25ce65da] <==
	I0229 18:50:53.074708       1 server_others.go:69] "Using iptables proxy"
	I0229 18:50:53.092341       1 node.go:141] Successfully retrieved node IP: 172.26.62.28
	I0229 18:50:53.146378       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0229 18:50:53.146404       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0229 18:50:53.149985       1 server_others.go:152] "Using iptables Proxier"
	I0229 18:50:53.150312       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 18:50:53.150825       1 server.go:846] "Version info" version="v1.28.4"
	I0229 18:50:53.150851       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 18:50:53.151682       1 config.go:188] "Starting service config controller"
	I0229 18:50:53.152018       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 18:50:53.152128       1 config.go:97] "Starting endpoint slice config controller"
	I0229 18:50:53.152136       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 18:50:53.153089       1 config.go:315] "Starting node config controller"
	I0229 18:50:53.153102       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 18:50:53.254073       1 shared_informer.go:318] Caches are synced for node config
	I0229 18:50:53.254154       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0229 18:50:53.254168       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-proxy [7e4ebe33d701] <==
	I0229 19:11:12.277347       1 server_others.go:69] "Using iptables proxy"
	I0229 19:11:12.308590       1 node.go:141] Successfully retrieved node IP: 172.26.52.109
	I0229 19:11:12.390602       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0229 19:11:12.393421       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0229 19:11:12.398153       1 server_others.go:152] "Using iptables Proxier"
	I0229 19:11:12.400019       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 19:11:12.400756       1 server.go:846] "Version info" version="v1.28.4"
	I0229 19:11:12.401077       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 19:11:12.404719       1 config.go:188] "Starting service config controller"
	I0229 19:11:12.405933       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 19:11:12.406200       1 config.go:97] "Starting endpoint slice config controller"
	I0229 19:11:12.406268       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 19:11:12.407573       1 config.go:315] "Starting node config controller"
	I0229 19:11:12.407633       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 19:11:12.506769       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0229 19:11:12.506818       1 shared_informer.go:318] Caches are synced for service config
	I0229 19:11:12.507785       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [b8c8786727c5] <==
	E0229 18:50:35.255198       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0229 18:50:36.212095       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0229 18:50:36.212964       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0229 18:50:36.212937       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0229 18:50:36.213254       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0229 18:50:36.224082       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0229 18:50:36.225483       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0229 18:50:36.241435       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0229 18:50:36.241985       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0229 18:50:36.295277       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0229 18:50:36.295305       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0229 18:50:36.495754       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0229 18:50:36.496464       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0229 18:50:36.536053       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0229 18:50:36.536397       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0229 18:50:36.536343       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0229 18:50:36.536962       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0229 18:50:36.553109       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0229 18:50:36.553299       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0229 18:50:36.560040       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0229 18:50:36.560240       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0229 18:50:39.441295       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0229 19:08:07.436193       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0229 19:08:07.436236       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0229 19:08:07.436454       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c6d6e0e1b0fa] <==
	I0229 19:11:08.602663       1 serving.go:348] Generated self-signed cert in-memory
	W0229 19:11:10.535829       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0229 19:11:10.536027       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0229 19:11:10.536100       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0229 19:11:10.536141       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0229 19:11:10.593024       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0229 19:11:10.593063       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 19:11:10.597299       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0229 19:11:10.597812       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0229 19:11:10.597906       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0229 19:11:10.601384       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0229 19:11:10.704620       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 29 19:12:05 multinode-421600 kubelet[1427]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 19:12:05 multinode-421600 kubelet[1427]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 19:12:05 multinode-421600 kubelet[1427]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 19:12:05 multinode-421600 kubelet[1427]: I0229 19:12:05.046677    1427 scope.go:117] "RemoveContainer" containerID="ea0adcda4ba9f4fbb2ab78e6f7d938d3c10a43a1725e6f4f0e4d992b3c5fd2f5"
	Feb 29 19:12:05 multinode-421600 kubelet[1427]: I0229 19:12:05.084890    1427 scope.go:117] "RemoveContainer" containerID="9245396d3b64c296d6e9af5e3c609401f9a00b846f8f083fe8eb196666cea945"
	Feb 29 19:13:05 multinode-421600 kubelet[1427]: E0229 19:13:05.032595    1427 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 19:13:05 multinode-421600 kubelet[1427]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 19:13:05 multinode-421600 kubelet[1427]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 19:13:05 multinode-421600 kubelet[1427]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 19:13:05 multinode-421600 kubelet[1427]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 19:14:05 multinode-421600 kubelet[1427]: E0229 19:14:05.030394    1427 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 19:14:05 multinode-421600 kubelet[1427]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 19:14:05 multinode-421600 kubelet[1427]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 19:14:05 multinode-421600 kubelet[1427]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 19:14:05 multinode-421600 kubelet[1427]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 19:15:05 multinode-421600 kubelet[1427]: E0229 19:15:05.035413    1427 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 19:15:05 multinode-421600 kubelet[1427]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 19:15:05 multinode-421600 kubelet[1427]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 19:15:05 multinode-421600 kubelet[1427]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 19:15:05 multinode-421600 kubelet[1427]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 29 19:16:05 multinode-421600 kubelet[1427]: E0229 19:16:05.030883    1427 iptables.go:575] "Could not set up iptables canary" err=<
	Feb 29 19:16:05 multinode-421600 kubelet[1427]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 29 19:16:05 multinode-421600 kubelet[1427]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 29 19:16:05 multinode-421600 kubelet[1427]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 29 19:16:05 multinode-421600 kubelet[1427]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 19:16:15.076557    1328 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-421600 -n multinode-421600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-421600 -n multinode-421600: (11.1238773s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-421600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (522.95s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (29.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 stop
multinode_test.go:342: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-421600 stop: exit status 1 (17.9601134s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-421600"  ...
	* Powering off "multinode-421600" via SSH ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 19:17:34.372640    1992 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
multinode_test.go:344: node stop returned an error. args "out/minikube-windows-amd64.exe -p multinode-421600 stop": exit status 1
multinode_test.go:348: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-421600 status: context deadline exceeded (0s)
multinode_test.go:351: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-421600 status" : context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-421600 -n multinode-421600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-421600 -n multinode-421600: exit status 7 (11.1507164s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 19:17:52.344334    6732 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0229 19:18:03.363873    6732 status.go:352] failed to get driver ip: getting IP: IP not found
	E0229 19:18:03.363873    6732 status.go:249] status error: getting IP: IP not found

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-421600" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopMultiNode (29.12s)

                                                
                                    
TestKubernetesUpgrade (1402.15s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-800700 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-800700 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperv: exit status 109 (10m42.1886422s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-800700] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting control plane node kubernetes-upgrade-800700 in cluster kubernetes-upgrade-800700
	* Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 24.0.7 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 19:31:19.618560    8040 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0229 19:31:19.694562    8040 out.go:291] Setting OutFile to fd 1872 ...
	I0229 19:31:19.694562    8040 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 19:31:19.694562    8040 out.go:304] Setting ErrFile to fd 1524...
	I0229 19:31:19.694562    8040 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 19:31:19.716570    8040 out.go:298] Setting JSON to false
	I0229 19:31:19.718562    8040 start.go:129] hostinfo: {"hostname":"minikube5","uptime":56816,"bootTime":1709178263,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0229 19:31:19.719569    8040 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 19:31:19.720559    8040 out.go:177] * [kubernetes-upgrade-800700] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 19:31:19.721568    8040 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 19:31:19.721568    8040 notify.go:220] Checking for updates...
	I0229 19:31:19.721568    8040 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 19:31:19.722569    8040 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0229 19:31:19.722569    8040 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 19:31:19.723562    8040 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 19:31:19.724562    8040 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 19:31:25.652134    8040 out.go:177] * Using the hyperv driver based on user configuration
	I0229 19:31:25.652805    8040 start.go:299] selected driver: hyperv
	I0229 19:31:25.652805    8040 start.go:903] validating driver "hyperv" against <nil>
	I0229 19:31:25.652925    8040 start.go:914] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 19:31:25.697026    8040 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 19:31:25.698025    8040 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0229 19:31:25.698025    8040 cni.go:84] Creating CNI manager for ""
	I0229 19:31:25.698025    8040 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0229 19:31:25.698025    8040 start_flags.go:323] config:
	{Name:kubernetes-upgrade-800700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-800700 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 19:31:25.698025    8040 iso.go:125] acquiring lock: {Name:mk91f2ee29fbed5605669750e8cfa308a1229357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 19:31:25.700026    8040 out.go:177] * Starting control plane node kubernetes-upgrade-800700 in cluster kubernetes-upgrade-800700
	I0229 19:31:25.700026    8040 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0229 19:31:25.700026    8040 preload.go:148] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0229 19:31:25.700026    8040 cache.go:56] Caching tarball of preloaded images
	I0229 19:31:25.700026    8040 preload.go:174] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 19:31:25.701026    8040 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0229 19:31:25.701026    8040 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-800700\config.json ...
	I0229 19:31:25.701026    8040 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-800700\config.json: {Name:mkb080dcc3546c5a4af06bcd8f4cd5da14a2e123 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:31:25.702028    8040 start.go:365] acquiring machines lock for kubernetes-upgrade-800700: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 19:35:26.320669    8040 start.go:369] acquired machines lock for "kubernetes-upgrade-800700" in 4m0.6052036s
	I0229 19:35:26.320745    8040 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-800700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-800700 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 19:35:26.321282    8040 start.go:125] createHost starting for "" (driver="hyperv")
	I0229 19:35:26.322333    8040 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0229 19:35:26.322612    8040 start.go:159] libmachine.API.Create for "kubernetes-upgrade-800700" (driver="hyperv")
	I0229 19:35:26.322612    8040 client.go:168] LocalClient.Create starting
	I0229 19:35:26.323520    8040 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0229 19:35:26.323866    8040 main.go:141] libmachine: Decoding PEM data...
	I0229 19:35:26.323939    8040 main.go:141] libmachine: Parsing certificate...
	I0229 19:35:26.324215    8040 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0229 19:35:26.324393    8040 main.go:141] libmachine: Decoding PEM data...
	I0229 19:35:26.324393    8040 main.go:141] libmachine: Parsing certificate...
	I0229 19:35:26.324588    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0229 19:35:28.112001    8040 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0229 19:35:28.112001    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:35:28.112001    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0229 19:35:29.763797    8040 main.go:141] libmachine: [stdout =====>] : False
	
	I0229 19:35:29.763797    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:35:29.763797    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0229 19:35:31.238252    8040 main.go:141] libmachine: [stdout =====>] : True
	
	I0229 19:35:31.239937    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:35:31.239937    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0229 19:35:34.869623    8040 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0229 19:35:34.869884    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:35:34.871793    8040 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 19:35:35.201723    8040 main.go:141] libmachine: Creating SSH key...
	I0229 19:35:35.582543    8040 main.go:141] libmachine: Creating VM...
	I0229 19:35:35.583060    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0229 19:35:38.383479    8040 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0229 19:35:38.383479    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:35:38.383479    8040 main.go:141] libmachine: Using switch "Default Switch"
	I0229 19:35:38.383479    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0229 19:35:40.009698    8040 main.go:141] libmachine: [stdout =====>] : True
	
	I0229 19:35:40.009837    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:35:40.009837    8040 main.go:141] libmachine: Creating VHD
	I0229 19:35:40.009837    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kubernetes-upgrade-800700\fixed.vhd' -SizeBytes 10MB -Fixed
	I0229 19:35:43.767209    8040 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kubernetes-upgrade-800700\
	                          fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : A6D1926D-88F7-4932-8882-1FAFC5553021
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0229 19:35:43.774951    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:35:43.774951    8040 main.go:141] libmachine: Writing magic tar header
	I0229 19:35:43.774951    8040 main.go:141] libmachine: Writing SSH key tar header
	I0229 19:35:43.783822    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kubernetes-upgrade-800700\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kubernetes-upgrade-800700\disk.vhd' -VHDType Dynamic -DeleteSource
	I0229 19:35:46.862699    8040 main.go:141] libmachine: [stdout =====>] : 
	I0229 19:35:46.872298    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:35:46.872298    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kubernetes-upgrade-800700\disk.vhd' -SizeBytes 20000MB
	I0229 19:35:49.261935    8040 main.go:141] libmachine: [stdout =====>] : 
	I0229 19:35:49.261935    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:35:49.261935    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM kubernetes-upgrade-800700 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kubernetes-upgrade-800700' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0229 19:35:54.155123    8040 main.go:141] libmachine: [stdout =====>] : 
	Name                      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                      ----- ----------- ----------------- ------   ------             -------
	kubernetes-upgrade-800700 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0229 19:35:54.155123    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:35:54.155123    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName kubernetes-upgrade-800700 -DynamicMemoryEnabled $false
	I0229 19:35:56.354653    8040 main.go:141] libmachine: [stdout =====>] : 
	I0229 19:35:56.360465    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:35:56.360465    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor kubernetes-upgrade-800700 -Count 2
	I0229 19:35:58.953729    8040 main.go:141] libmachine: [stdout =====>] : 
	I0229 19:35:58.953729    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:35:58.953729    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName kubernetes-upgrade-800700 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kubernetes-upgrade-800700\boot2docker.iso'
	I0229 19:36:01.301991    8040 main.go:141] libmachine: [stdout =====>] : 
	I0229 19:36:01.301991    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:36:01.301991    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName kubernetes-upgrade-800700 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kubernetes-upgrade-800700\disk.vhd'
	I0229 19:36:03.701933    8040 main.go:141] libmachine: [stdout =====>] : 
	I0229 19:36:03.701933    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:36:03.701933    8040 main.go:141] libmachine: Starting VM...
	I0229 19:36:03.701933    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM kubernetes-upgrade-800700
	I0229 19:36:06.292146    8040 main.go:141] libmachine: [stdout =====>] : 
	I0229 19:36:06.304158    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:36:06.304158    8040 main.go:141] libmachine: Waiting for host to start...
	I0229 19:36:06.304158    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-800700 ).state
	I0229 19:36:08.385670    8040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:36:08.388739    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:36:08.388739    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-800700 ).networkadapters[0]).ipaddresses[0]
	I0229 19:36:10.693177    8040 main.go:141] libmachine: [stdout =====>] : 
	I0229 19:36:10.693177    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:36:11.698583    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-800700 ).state
	I0229 19:36:13.730916    8040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:36:13.731107    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:36:13.731107    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-800700 ).networkadapters[0]).ipaddresses[0]
	I0229 19:36:16.015474    8040 main.go:141] libmachine: [stdout =====>] : 
	I0229 19:36:16.020566    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:36:17.034330    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-800700 ).state
	I0229 19:36:19.103640    8040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:36:19.103640    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:36:19.113641    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-800700 ).networkadapters[0]).ipaddresses[0]
	I0229 19:36:21.499951    8040 main.go:141] libmachine: [stdout =====>] : 
	I0229 19:36:21.499951    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:36:22.514842    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-800700 ).state
	I0229 19:36:24.468256    8040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:36:24.468256    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:36:24.473301    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-800700 ).networkadapters[0]).ipaddresses[0]
	I0229 19:36:26.856045    8040 main.go:141] libmachine: [stdout =====>] : 
	I0229 19:36:26.856724    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:36:27.876692    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-800700 ).state
	I0229 19:36:29.938981    8040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:36:29.938981    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:36:29.939077    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-800700 ).networkadapters[0]).ipaddresses[0]
	I0229 19:36:32.363243    8040 main.go:141] libmachine: [stdout =====>] : 172.26.56.26
	
	I0229 19:36:32.363312    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:36:32.363312    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-800700 ).state
	I0229 19:36:34.377947    8040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:36:34.377947    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:36:34.377947    8040 machine.go:88] provisioning docker machine ...
	I0229 19:36:34.377947    8040 buildroot.go:166] provisioning hostname "kubernetes-upgrade-800700"
	I0229 19:36:34.377947    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-800700 ).state
	I0229 19:36:36.366829    8040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:36:36.366829    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:36:36.366829    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-800700 ).networkadapters[0]).ipaddresses[0]
	I0229 19:36:38.810560    8040 main.go:141] libmachine: [stdout =====>] : 172.26.56.26
	
	I0229 19:36:38.810560    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:36:38.816634    8040 main.go:141] libmachine: Using SSH client type: native
	I0229 19:36:38.824008    8040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.56.26 22 <nil> <nil>}
	I0229 19:36:38.824008    8040 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-800700 && echo "kubernetes-upgrade-800700" | sudo tee /etc/hostname
	I0229 19:36:38.992880    8040 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-800700
	
	I0229 19:36:38.992965    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-800700 ).state
	I0229 19:36:41.040012    8040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:36:41.040012    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:36:41.040012    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-800700 ).networkadapters[0]).ipaddresses[0]
	I0229 19:36:43.418994    8040 main.go:141] libmachine: [stdout =====>] : 172.26.56.26
	
	I0229 19:36:43.418994    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:36:43.425511    8040 main.go:141] libmachine: Using SSH client type: native
	I0229 19:36:43.426162    8040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.56.26 22 <nil> <nil>}
	I0229 19:36:43.426251    8040 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-800700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-800700/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-800700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 19:36:43.588201    8040 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 19:36:43.588201    8040 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0229 19:36:43.588201    8040 buildroot.go:174] setting up certificates
	I0229 19:36:43.588201    8040 provision.go:83] configureAuth start
	I0229 19:36:43.588201    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-800700 ).state
	I0229 19:36:45.576424    8040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:36:45.576424    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:36:45.576424    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-800700 ).networkadapters[0]).ipaddresses[0]
	I0229 19:36:47.987194    8040 main.go:141] libmachine: [stdout =====>] : 172.26.56.26
	
	I0229 19:36:47.987242    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:36:47.987289    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-800700 ).state
	I0229 19:36:50.043891    8040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:36:50.044488    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:36:50.044488    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-800700 ).networkadapters[0]).ipaddresses[0]
	I0229 19:36:52.449068    8040 main.go:141] libmachine: [stdout =====>] : 172.26.56.26
	
	I0229 19:36:52.449068    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:36:52.449157    8040 provision.go:138] copyHostCerts
	I0229 19:36:52.449675    8040 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0229 19:36:52.449675    8040 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0229 19:36:52.450208    8040 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0229 19:36:52.451732    8040 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0229 19:36:52.451732    8040 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0229 19:36:52.452273    8040 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0229 19:36:52.453202    8040 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0229 19:36:52.453202    8040 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0229 19:36:52.453745    8040 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0229 19:36:52.454701    8040 provision.go:112] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubernetes-upgrade-800700 san=[172.26.56.26 172.26.56.26 localhost 127.0.0.1 minikube kubernetes-upgrade-800700]
	I0229 19:36:52.885477    8040 provision.go:172] copyRemoteCerts
	I0229 19:36:52.893539    8040 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 19:36:52.893539    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-800700 ).state
	I0229 19:36:54.871444    8040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:36:54.871444    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:36:54.871702    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-800700 ).networkadapters[0]).ipaddresses[0]
	I0229 19:36:57.277388    8040 main.go:141] libmachine: [stdout =====>] : 172.26.56.26
	
	I0229 19:36:57.277590    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:36:57.277590    8040 sshutil.go:53] new ssh client: &{IP:172.26.56.26 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kubernetes-upgrade-800700\id_rsa Username:docker}
	I0229 19:36:57.388207    8040 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4944177s)
	I0229 19:36:57.389113    8040 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 19:36:57.438257    8040 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1249 bytes)
	I0229 19:36:57.498833    8040 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 19:36:57.549919    8040 provision.go:86] duration metric: configureAuth took 13.9609415s
	I0229 19:36:57.549919    8040 buildroot.go:189] setting minikube options for container-runtime
	I0229 19:36:57.550541    8040 config.go:182] Loaded profile config "kubernetes-upgrade-800700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0229 19:36:57.550541    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-800700 ).state
	I0229 19:36:59.540088    8040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:36:59.540277    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:36:59.540344    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-800700 ).networkadapters[0]).ipaddresses[0]
	I0229 19:37:01.866546    8040 main.go:141] libmachine: [stdout =====>] : 172.26.56.26
	
	I0229 19:37:01.866546    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:37:01.870100    8040 main.go:141] libmachine: Using SSH client type: native
	I0229 19:37:01.870100    8040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.56.26 22 <nil> <nil>}
	I0229 19:37:01.870100    8040 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 19:37:01.997288    8040 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 19:37:01.997379    8040 buildroot.go:70] root file system type: tmpfs
	I0229 19:37:01.997523    8040 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 19:37:01.997523    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-800700 ).state
	I0229 19:37:03.997604    8040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:37:03.997604    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:37:03.997750    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-800700 ).networkadapters[0]).ipaddresses[0]
	I0229 19:37:06.415873    8040 main.go:141] libmachine: [stdout =====>] : 172.26.56.26
	
	I0229 19:37:06.415873    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:37:06.420108    8040 main.go:141] libmachine: Using SSH client type: native
	I0229 19:37:06.420528    8040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.56.26 22 <nil> <nil>}
	I0229 19:37:06.420635    8040 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 19:37:06.588521    8040 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 19:37:06.588521    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-800700 ).state
	I0229 19:37:08.600427    8040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:37:08.600427    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:37:08.600498    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-800700 ).networkadapters[0]).ipaddresses[0]
	I0229 19:37:11.015290    8040 main.go:141] libmachine: [stdout =====>] : 172.26.56.26
	
	I0229 19:37:11.015290    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:37:11.019386    8040 main.go:141] libmachine: Using SSH client type: native
	I0229 19:37:11.019787    8040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.56.26 22 <nil> <nil>}
	I0229 19:37:11.019787    8040 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 19:37:13.871813    8040 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0229 19:37:13.871813    8040 machine.go:91] provisioned docker machine in 39.4916705s
	I0229 19:37:13.871813    8040 client.go:171] LocalClient.Create took 1m47.5432187s
	I0229 19:37:13.871813    8040 start.go:167] duration metric: libmachine.API.Create for "kubernetes-upgrade-800700" took 1m47.5432187s
	I0229 19:37:13.872419    8040 start.go:300] post-start starting for "kubernetes-upgrade-800700" (driver="hyperv")
	I0229 19:37:13.872476    8040 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 19:37:13.883029    8040 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 19:37:13.883029    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-800700 ).state
	I0229 19:37:15.877170    8040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:37:15.877170    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:37:15.877170    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-800700 ).networkadapters[0]).ipaddresses[0]
	I0229 19:37:18.307472    8040 main.go:141] libmachine: [stdout =====>] : 172.26.56.26
	
	I0229 19:37:18.307472    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:37:18.308296    8040 sshutil.go:53] new ssh client: &{IP:172.26.56.26 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kubernetes-upgrade-800700\id_rsa Username:docker}
	I0229 19:37:18.421621    8040 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5383398s)
	I0229 19:37:18.430617    8040 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 19:37:18.437975    8040 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 19:37:18.437975    8040 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0229 19:37:18.438980    8040 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0229 19:37:18.438980    8040 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem -> 43562.pem in /etc/ssl/certs
	I0229 19:37:18.447980    8040 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 19:37:18.466139    8040 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem --> /etc/ssl/certs/43562.pem (1708 bytes)
	I0229 19:37:18.514897    8040 start.go:303] post-start completed in 4.6421984s
	I0229 19:37:18.515884    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-800700 ).state
	I0229 19:37:20.555684    8040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:37:20.555684    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:37:20.555884    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-800700 ).networkadapters[0]).ipaddresses[0]
	I0229 19:37:23.021454    8040 main.go:141] libmachine: [stdout =====>] : 172.26.56.26
	
	I0229 19:37:23.021454    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:37:23.021591    8040 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-800700\config.json ...
	I0229 19:37:23.024734    8040 start.go:128] duration metric: createHost completed in 1m56.6963582s
	I0229 19:37:23.024824    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-800700 ).state
	I0229 19:37:25.064944    8040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:37:25.064944    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:37:25.064944    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-800700 ).networkadapters[0]).ipaddresses[0]
	I0229 19:37:27.667664    8040 main.go:141] libmachine: [stdout =====>] : 172.26.56.26
	
	I0229 19:37:27.667664    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:37:27.672620    8040 main.go:141] libmachine: Using SSH client type: native
	I0229 19:37:27.672736    8040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.56.26 22 <nil> <nil>}
	I0229 19:37:27.672736    8040 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 19:37:27.800322    8040 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709235447.964692801
	
	I0229 19:37:27.800396    8040 fix.go:206] guest clock: 1709235447.964692801
	I0229 19:37:27.800396    8040 fix.go:219] Guest: 2024-02-29 19:37:27.964692801 +0000 UTC Remote: 2024-02-29 19:37:23.0247345 +0000 UTC m=+363.479420501 (delta=4.939958301s)
	I0229 19:37:27.800528    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-800700 ).state
	I0229 19:37:29.809555    8040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:37:29.809555    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:37:29.809555    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-800700 ).networkadapters[0]).ipaddresses[0]
	I0229 19:37:32.441076    8040 main.go:141] libmachine: [stdout =====>] : 172.26.56.26
	
	I0229 19:37:32.441134    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:37:32.446036    8040 main.go:141] libmachine: Using SSH client type: native
	I0229 19:37:32.446739    8040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.56.26 22 <nil> <nil>}
	I0229 19:37:32.446739    8040 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709235447
	I0229 19:37:32.591283    8040 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Feb 29 19:37:27 UTC 2024
	
	I0229 19:37:32.591283    8040 fix.go:226] clock set: Thu Feb 29 19:37:27 UTC 2024
	 (err=<nil>)
	I0229 19:37:32.591283    8040 start.go:83] releasing machines lock for "kubernetes-upgrade-800700", held for 2m6.2635143s
	I0229 19:37:32.591283    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-800700 ).state
	I0229 19:37:34.560163    8040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:37:34.560715    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:37:34.560789    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-800700 ).networkadapters[0]).ipaddresses[0]
	I0229 19:37:36.944288    8040 main.go:141] libmachine: [stdout =====>] : 172.26.56.26
	
	I0229 19:37:36.944559    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:37:36.949944    8040 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 19:37:36.950200    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-800700 ).state
	I0229 19:37:36.959405    8040 ssh_runner.go:195] Run: cat /version.json
	I0229 19:37:36.959405    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-800700 ).state
	I0229 19:37:38.968400    8040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:37:38.968400    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:37:38.968473    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-800700 ).networkadapters[0]).ipaddresses[0]
	I0229 19:37:38.969076    8040 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:37:38.969076    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:37:38.969076    8040 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-800700 ).networkadapters[0]).ipaddresses[0]
	I0229 19:37:41.315900    8040 main.go:141] libmachine: [stdout =====>] : 172.26.56.26
	
	I0229 19:37:41.316135    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:37:41.316226    8040 sshutil.go:53] new ssh client: &{IP:172.26.56.26 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kubernetes-upgrade-800700\id_rsa Username:docker}
	I0229 19:37:41.349207    8040 main.go:141] libmachine: [stdout =====>] : 172.26.56.26
	
	I0229 19:37:41.349207    8040 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:37:41.349207    8040 sshutil.go:53] new ssh client: &{IP:172.26.56.26 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kubernetes-upgrade-800700\id_rsa Username:docker}
	I0229 19:37:41.404823    8040 ssh_runner.go:235] Completed: cat /version.json: (4.4451714s)
	I0229 19:37:41.415253    8040 ssh_runner.go:195] Run: systemctl --version
	I0229 19:37:41.500893    8040 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5506272s)
	I0229 19:37:41.512141    8040 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 19:37:41.521662    8040 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 19:37:41.530912    8040 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0229 19:37:41.560864    8040 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0229 19:37:41.590649    8040 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 19:37:41.590649    8040 start.go:475] detecting cgroup driver to use...
	I0229 19:37:41.590880    8040 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 19:37:41.643602    8040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0229 19:37:41.674725    8040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 19:37:41.695887    8040 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 19:37:41.704840    8040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 19:37:41.734213    8040 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 19:37:41.763196    8040 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 19:37:41.801630    8040 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 19:37:41.834085    8040 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 19:37:41.876268    8040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 19:37:41.916766    8040 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 19:37:41.944603    8040 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 19:37:41.972638    8040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 19:37:42.171653    8040 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 19:37:42.207788    8040 start.go:475] detecting cgroup driver to use...
	I0229 19:37:42.216426    8040 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 19:37:42.248343    8040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 19:37:42.278886    8040 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 19:37:42.310848    8040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 19:37:42.345204    8040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 19:37:42.384877    8040 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 19:37:42.438392    8040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 19:37:42.461849    8040 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 19:37:42.512901    8040 ssh_runner.go:195] Run: which cri-dockerd
	I0229 19:37:42.529184    8040 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 19:37:42.549037    8040 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 19:37:42.597075    8040 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 19:37:42.797068    8040 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 19:37:42.995779    8040 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 19:37:42.996031    8040 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 19:37:43.040959    8040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 19:37:43.241988    8040 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 19:37:44.987401    8040 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.745316s)
	I0229 19:37:44.994463    8040 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 19:37:45.037750    8040 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 19:37:45.084885    8040 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 24.0.7 ...
	I0229 19:37:45.085923    8040 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0229 19:37:45.089180    8040 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0229 19:37:45.089557    8040 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0229 19:37:45.089557    8040 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0229 19:37:45.089557    8040 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:76:3f:19 Flags:up|broadcast|multicast|running}
	I0229 19:37:45.093986    8040 ip.go:210] interface addr: fe80::841a:4367:8c9:abc/64
	I0229 19:37:45.094074    8040 ip.go:210] interface addr: 172.26.48.1/20
	I0229 19:37:45.102549    8040 ssh_runner.go:195] Run: grep 172.26.48.1	host.minikube.internal$ /etc/hosts
	I0229 19:37:45.113557    8040 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.26.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 19:37:45.138859    8040 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0229 19:37:45.145319    8040 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 19:37:45.170710    8040 docker.go:685] Got preloaded images: 
	I0229 19:37:45.170893    8040 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0229 19:37:45.179131    8040 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 19:37:45.208115    8040 ssh_runner.go:195] Run: which lz4
	I0229 19:37:45.221934    8040 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 19:37:45.227860    8040 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 19:37:45.227860    8040 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0229 19:37:46.969446    8040 docker.go:649] Took 1.755405 seconds to copy over tarball
	I0229 19:37:46.982487    8040 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 19:37:57.511575    8040 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (10.5284692s)
	I0229 19:37:57.511575    8040 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 19:37:57.582414    8040 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 19:37:57.601996    8040 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0229 19:37:57.643001    8040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 19:37:57.836027    8040 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 19:37:59.531195    8040 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.6950732s)
	I0229 19:37:59.539196    8040 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 19:37:59.568813    8040 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0229 19:37:59.568813    8040 docker.go:691] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0229 19:37:59.568813    8040 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 19:37:59.582825    8040 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0229 19:37:59.583827    8040 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 19:37:59.589825    8040 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 19:37:59.590831    8040 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 19:37:59.593821    8040 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0229 19:37:59.593821    8040 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 19:37:59.596823    8040 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 19:37:59.596823    8040 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 19:37:59.596823    8040 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0229 19:37:59.597817    8040 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0229 19:37:59.602835    8040 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 19:37:59.603858    8040 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 19:37:59.606875    8040 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 19:37:59.608813    8040 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0229 19:37:59.608813    8040 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 19:37:59.610836    8040 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	W0229 19:37:59.683077    8040 image.go:187] authn lookup for registry.k8s.io/kube-scheduler:v1.16.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 19:37:59.761821    8040 image.go:187] authn lookup for registry.k8s.io/pause:3.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 19:37:59.840011    8040 image.go:187] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 19:37:59.915410    8040 image.go:187] authn lookup for registry.k8s.io/kube-apiserver:v1.16.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 19:37:59.993501    8040 image.go:187] authn lookup for registry.k8s.io/kube-controller-manager:v1.16.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 19:38:00.086454    8040 image.go:187] authn lookup for registry.k8s.io/etcd:3.3.15-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0229 19:38:00.114838    8040 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0229 19:38:00.136946    8040 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 19:38:00.160901    8040 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0229 19:38:00.170873    8040 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0229 19:38:00.170873    8040 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.1 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1
	I0229 19:38:00.171003    8040 docker.go:337] Removing image: registry.k8s.io/pause:3.1
	W0229 19:38:00.177006    8040 image.go:187] authn lookup for registry.k8s.io/kube-proxy:v1.16.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0229 19:38:00.179057    8040 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0229 19:38:00.185594    8040 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0229 19:38:00.216578    8040 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 19:38:00.253964    8040 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1
	I0229 19:38:00.253964    8040 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0229 19:38:00.253964    8040 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0229 19:38:00.253964    8040 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0229 19:38:00.253964    8040 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.16.0 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.16.0
	I0229 19:38:00.254971    8040 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.16.0 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.16.0
	I0229 19:38:00.254971    8040 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.16.0 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.16.0
	I0229 19:38:00.254971    8040 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0229 19:38:00.255042    8040 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0229 19:38:00.255042    8040 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	W0229 19:38:00.255437    8040 image.go:187] authn lookup for registry.k8s.io/coredns:1.6.2 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0229 19:38:00.263869    8040 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0229 19:38:00.264829    8040 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0229 19:38:00.265448    8040 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0229 19:38:00.297475    8040 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.16.0
	I0229 19:38:00.309517    8040 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0229 19:38:00.311158    8040 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.16.0
	I0229 19:38:00.337008    8040 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.16.0
	I0229 19:38:00.337066    8040 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0229 19:38:00.337066    8040 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.3.15-0 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.3.15-0
	I0229 19:38:00.337118    8040 docker.go:337] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0229 19:38:00.343966    8040 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0229 19:38:00.371620    8040 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.3.15-0
	I0229 19:38:00.411779    8040 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0229 19:38:00.436669    8040 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0229 19:38:00.436669    8040 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.16.0 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.16.0
	I0229 19:38:00.436765    8040 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0229 19:38:00.443240    8040 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0229 19:38:00.471234    8040 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.16.0
	I0229 19:38:00.515301    8040 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0229 19:38:00.545259    8040 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0229 19:38:00.545259    8040 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns:1.6.2 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.2
	I0229 19:38:00.545259    8040 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.2
	I0229 19:38:00.551818    8040 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0229 19:38:00.574606    8040 cache_images.go:286] Loading image from: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.2
	I0229 19:38:00.574606    8040 cache_images.go:92] LoadImages completed in 1.0057364s
	W0229 19:38:00.575607    8040 out.go:239] X Unable to load cached images: loading cached images: CreateFile C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1: The system cannot find the file specified.
	X Unable to load cached images: loading cached images: CreateFile C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1: The system cannot find the file specified.
	I0229 19:38:00.581613    8040 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 19:38:00.620725    8040 cni.go:84] Creating CNI manager for ""
	I0229 19:38:00.621452    8040 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0229 19:38:00.621452    8040 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0229 19:38:00.621452    8040 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.26.56.26 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-800700 NodeName:kubernetes-upgrade-800700 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.26.56.26"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.26.56.26 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0229 19:38:00.621452    8040 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.26.56.26
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-800700"
	  kubeletExtraArgs:
	    node-ip: 172.26.56.26
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.26.56.26"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-800700
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://172.26.56.26:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0229 19:38:00.621452    8040 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-800700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.26.56.26
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-800700 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0229 19:38:00.630272    8040 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0229 19:38:00.648445    8040 binaries.go:44] Found k8s binaries, skipping transfer
	I0229 19:38:00.657443    8040 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0229 19:38:00.676051    8040 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (351 bytes)
	I0229 19:38:00.714480    8040 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0229 19:38:00.750163    8040 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2180 bytes)
	I0229 19:38:00.790149    8040 ssh_runner.go:195] Run: grep 172.26.56.26	control-plane.minikube.internal$ /etc/hosts
	I0229 19:38:00.796889    8040 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.26.56.26	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 19:38:00.820725    8040 certs.go:56] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-800700 for IP: 172.26.56.26
	I0229 19:38:00.820725    8040 certs.go:190] acquiring lock for shared ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:38:00.821348    8040 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0229 19:38:00.821348    8040 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0229 19:38:00.822073    8040 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-800700\client.key
	I0229 19:38:00.822073    8040 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-800700\client.crt with IP's: []
	I0229 19:38:01.174387    8040 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-800700\client.crt ...
	I0229 19:38:01.174387    8040 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-800700\client.crt: {Name:mkdd5a688ccf6bc7b11af9f3fbdf0c5eadc79b04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:38:01.175388    8040 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-800700\client.key ...
	I0229 19:38:01.175388    8040 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-800700\client.key: {Name:mk507d1fba9c8537a7a0c747a06efee0e8c1a367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:38:01.176333    8040 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-800700\apiserver.key.5c257261
	I0229 19:38:01.176333    8040 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-800700\apiserver.crt.5c257261 with IP's: [172.26.56.26 10.96.0.1 127.0.0.1 10.0.0.1]
	I0229 19:38:01.280333    8040 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-800700\apiserver.crt.5c257261 ...
	I0229 19:38:01.280333    8040 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-800700\apiserver.crt.5c257261: {Name:mkec2e5c66c5ff9596ae8768f9b561f531f3ccc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:38:01.281332    8040 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-800700\apiserver.key.5c257261 ...
	I0229 19:38:01.281332    8040 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-800700\apiserver.key.5c257261: {Name:mkb3434c545c6cbea06135684157430b3fc076ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:38:01.282334    8040 certs.go:337] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-800700\apiserver.crt.5c257261 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-800700\apiserver.crt
	I0229 19:38:01.292349    8040 certs.go:341] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-800700\apiserver.key.5c257261 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-800700\apiserver.key
	I0229 19:38:01.293343    8040 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-800700\proxy-client.key
	I0229 19:38:01.293343    8040 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-800700\proxy-client.crt with IP's: []
	I0229 19:38:01.642256    8040 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-800700\proxy-client.crt ...
	I0229 19:38:01.642256    8040 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-800700\proxy-client.crt: {Name:mk30527b5ad9f03ea9fe50c7a0e75b24c3539b08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:38:01.643390    8040 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-800700\proxy-client.key ...
	I0229 19:38:01.644397    8040 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-800700\proxy-client.key: {Name:mkc6fe3a91f3e266f33b067345918995790b666f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:38:01.655547    8040 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4356.pem (1338 bytes)
	W0229 19:38:01.656526    8040 certs.go:433] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4356_empty.pem, impossibly tiny 0 bytes
	I0229 19:38:01.656526    8040 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0229 19:38:01.656823    8040 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0229 19:38:01.656988    8040 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0229 19:38:01.657176    8040 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0229 19:38:01.657339    8040 certs.go:437] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem (1708 bytes)
	I0229 19:38:01.658655    8040 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-800700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0229 19:38:01.712919    8040 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-800700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0229 19:38:01.761230    8040 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-800700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0229 19:38:01.818113    8040 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kubernetes-upgrade-800700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0229 19:38:01.865813    8040 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0229 19:38:01.914897    8040 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0229 19:38:01.966782    8040 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0229 19:38:02.013991    8040 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0229 19:38:02.062039    8040 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\4356.pem --> /usr/share/ca-certificates/4356.pem (1338 bytes)
	I0229 19:38:02.111202    8040 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem --> /usr/share/ca-certificates/43562.pem (1708 bytes)
	I0229 19:38:02.163754    8040 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0229 19:38:02.212858    8040 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0229 19:38:02.258978    8040 ssh_runner.go:195] Run: openssl version
	I0229 19:38:02.279484    8040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0229 19:38:02.312797    8040 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0229 19:38:02.320630    8040 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 29 17:41 /usr/share/ca-certificates/minikubeCA.pem
	I0229 19:38:02.331118    8040 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0229 19:38:02.357384    8040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0229 19:38:02.387728    8040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4356.pem && ln -fs /usr/share/ca-certificates/4356.pem /etc/ssl/certs/4356.pem"
	I0229 19:38:02.421612    8040 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4356.pem
	I0229 19:38:02.428835    8040 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 29 17:55 /usr/share/ca-certificates/4356.pem
	I0229 19:38:02.438260    8040 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4356.pem
	I0229 19:38:02.456340    8040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4356.pem /etc/ssl/certs/51391683.0"
	I0229 19:38:02.487677    8040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43562.pem && ln -fs /usr/share/ca-certificates/43562.pem /etc/ssl/certs/43562.pem"
	I0229 19:38:02.518277    8040 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43562.pem
	I0229 19:38:02.526346    8040 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 29 17:55 /usr/share/ca-certificates/43562.pem
	I0229 19:38:02.535054    8040 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43562.pem
	I0229 19:38:02.553403    8040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43562.pem /etc/ssl/certs/3ec20f2e.0"
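The `openssl x509 -hash` / `ln -fs` sequence above is how minikube publishes extra CA certificates to the system trust store: OpenSSL locates CAs in a `-CApath` directory via symlinks named after the certificate's subject hash (e.g. `b5213941.0`). A minimal sketch of the same technique, using a throwaway self-signed cert (the `demoCA` name and temp directory are illustrative, not from the log):

```shell
#!/bin/sh
set -eu
# Scratch directory standing in for /usr/share/ca-certificates + /etc/ssl/certs.
CERT_DIR=$(mktemp -d)
# Generate a throwaway self-signed certificate for demonstration only.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$CERT_DIR/demoCA.key" -out "$CERT_DIR/demoCA.pem" -days 1 2>/dev/null
# Compute the subject hash OpenSSL uses to look the CA up in a CApath.
HASH=$(openssl x509 -hash -noout -in "$CERT_DIR/demoCA.pem")
# Register the cert under its hash name, exactly as the log's ln -fs step does.
ln -fs "$CERT_DIR/demoCA.pem" "$CERT_DIR/$HASH.0"
ls -la "$CERT_DIR/$HASH.0"
```

This mirrors why the log first runs `ls -la` and `openssl x509 -hash -noout` on each `.pem` before creating the `*.0` symlink in `/etc/ssl/certs`.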
	I0229 19:38:02.584039    8040 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0229 19:38:02.591578    8040 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0229 19:38:02.591960    8040 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-800700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-800700 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.26.56.26 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 19:38:02.598890    8040 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0229 19:38:02.639072    8040 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0229 19:38:02.668510    8040 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0229 19:38:02.702920    8040 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 19:38:02.726719    8040 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 19:38:02.726719    8040 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 19:38:02.945743    8040 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 19:38:02.946776    8040 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 19:38:03.474630    8040 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 19:38:03.475719    8040 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 19:38:03.475719    8040 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 19:38:03.712359    8040 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 19:38:03.716834    8040 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 19:38:03.729034    8040 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 19:38:03.923268    8040 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 19:38:03.924438    8040 out.go:204]   - Generating certificates and keys ...
	I0229 19:38:03.924536    8040 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 19:38:03.924874    8040 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 19:38:04.276059    8040 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0229 19:38:04.442703    8040 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0229 19:38:04.843259    8040 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0229 19:38:04.985914    8040 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0229 19:38:05.119284    8040 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0229 19:38:05.119359    8040 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-800700 localhost] and IPs [172.26.56.26 127.0.0.1 ::1]
	I0229 19:38:05.277694    8040 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0229 19:38:05.277694    8040 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-800700 localhost] and IPs [172.26.56.26 127.0.0.1 ::1]
	I0229 19:38:05.399255    8040 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0229 19:38:05.727087    8040 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0229 19:38:05.871953    8040 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0229 19:38:05.872228    8040 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 19:38:06.335052    8040 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 19:38:06.712869    8040 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 19:38:06.882043    8040 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 19:38:07.157226    8040 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 19:38:07.158431    8040 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 19:38:07.159234    8040 out.go:204]   - Booting up control plane ...
	I0229 19:38:07.159586    8040 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 19:38:07.170732    8040 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 19:38:07.171997    8040 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 19:38:07.173499    8040 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 19:38:07.178238    8040 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 19:38:47.176983    8040 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 19:38:47.177559    8040 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:38:47.178892    8040 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:38:52.179055    8040 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:38:52.179545    8040 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:39:02.178618    8040 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:39:02.179072    8040 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:39:22.180782    8040 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:39:22.181268    8040 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:40:02.181218    8040 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:40:02.181809    8040 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:40:02.181877    8040 kubeadm.go:322] 
	I0229 19:40:02.181965    8040 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 19:40:02.182186    8040 kubeadm.go:322] 	timed out waiting for the condition
	I0229 19:40:02.182277    8040 kubeadm.go:322] 
	I0229 19:40:02.182277    8040 kubeadm.go:322] This error is likely caused by:
	I0229 19:40:02.182377    8040 kubeadm.go:322] 	- The kubelet is not running
	I0229 19:40:02.182636    8040 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 19:40:02.182702    8040 kubeadm.go:322] 
	I0229 19:40:02.182907    8040 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 19:40:02.182977    8040 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 19:40:02.183111    8040 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 19:40:02.183111    8040 kubeadm.go:322] 
	I0229 19:40:02.183409    8040 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 19:40:02.183608    8040 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 19:40:02.183878    8040 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 19:40:02.183996    8040 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 19:40:02.184190    8040 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 19:40:02.184326    8040 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 19:40:02.185861    8040 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 19:40:02.186338    8040 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0229 19:40:02.186534    8040 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 19:40:02.186534    8040 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 19:40:02.186534    8040 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0229 19:40:02.187125    8040 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-800700 localhost] and IPs [172.26.56.26 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-800700 localhost] and IPs [172.26.56.26 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
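The kubelet triage steps kubeadm recommends in the failure output above (`systemctl status kubelet`, `journalctl -xeu kubelet`, `docker ps -a | grep kube | grep -v pause`) can be collected into a small diagnostic script. A sketch, written to degrade gracefully on hosts where systemd or docker is unavailable:

```shell
#!/bin/sh
# Run one diagnostic command; never let a missing tool abort the whole script.
run_check() {
  echo "== $*"
  "$@" 2>&1 | head -n 5 || true
}

# Is the kubelet unit running, and what do its recent logs say?
run_check systemctl status kubelet
run_check journalctl -xeu kubelet

# List Kubernetes containers (excluding the pause sandbox), per the hint above.
docker ps -a 2>/dev/null | grep kube | grep -v pause || echo "no kube containers visible"
```

On the failing node in this log, the `connection refused` on `http://localhost:10248/healthz` means the kubelet never came up, so `journalctl -xeu kubelet` is the most informative of the three checks.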
	I0229 19:40:02.187374    8040 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0229 19:40:02.867428    8040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:40:02.905513    8040 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0229 19:40:02.925517    8040 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0229 19:40:02.925517    8040 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0229 19:40:03.134071    8040 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0229 19:40:03.185470    8040 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I0229 19:40:03.291934    8040 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0229 19:42:00.923063    8040 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0229 19:42:00.923401    8040 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0229 19:42:00.925378    8040 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0229 19:42:00.925528    8040 kubeadm.go:322] [preflight] Running pre-flight checks
	I0229 19:42:00.925744    8040 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0229 19:42:00.925977    8040 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0229 19:42:00.926334    8040 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0229 19:42:00.926541    8040 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0229 19:42:00.926767    8040 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0229 19:42:00.926881    8040 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0229 19:42:00.927047    8040 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0229 19:42:00.946290    8040 out.go:204]   - Generating certificates and keys ...
	I0229 19:42:00.946706    8040 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0229 19:42:00.946862    8040 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0229 19:42:00.947010    8040 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0229 19:42:00.947087    8040 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0229 19:42:00.947222    8040 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0229 19:42:00.947424    8040 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0229 19:42:00.947537    8040 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0229 19:42:00.947932    8040 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0229 19:42:00.947932    8040 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0229 19:42:00.947932    8040 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0229 19:42:00.947932    8040 kubeadm.go:322] [certs] Using the existing "sa" key
	I0229 19:42:00.947932    8040 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0229 19:42:00.947932    8040 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0229 19:42:00.947932    8040 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0229 19:42:00.948516    8040 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0229 19:42:00.948588    8040 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0229 19:42:00.948689    8040 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0229 19:42:00.949166    8040 out.go:204]   - Booting up control plane ...
	I0229 19:42:00.949324    8040 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0229 19:42:00.949653    8040 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0229 19:42:00.949849    8040 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0229 19:42:00.950042    8040 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0229 19:42:00.950470    8040 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0229 19:42:00.950604    8040 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0229 19:42:00.950771    8040 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:42:00.951128    8040 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:42:00.951348    8040 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:42:00.951790    8040 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:42:00.951973    8040 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:42:00.952053    8040 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:42:00.952053    8040 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:42:00.952914    8040 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:42:00.953083    8040 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0229 19:42:00.953470    8040 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0229 19:42:00.953533    8040 kubeadm.go:322] 
	I0229 19:42:00.953586    8040 kubeadm.go:322] Unfortunately, an error has occurred:
	I0229 19:42:00.953645    8040 kubeadm.go:322] 	timed out waiting for the condition
	I0229 19:42:00.953645    8040 kubeadm.go:322] 
	I0229 19:42:00.953821    8040 kubeadm.go:322] This error is likely caused by:
	I0229 19:42:00.953905    8040 kubeadm.go:322] 	- The kubelet is not running
	I0229 19:42:00.954175    8040 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0229 19:42:00.954234    8040 kubeadm.go:322] 
	I0229 19:42:00.954397    8040 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0229 19:42:00.954397    8040 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0229 19:42:00.954397    8040 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0229 19:42:00.954397    8040 kubeadm.go:322] 
	I0229 19:42:00.955170    8040 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0229 19:42:00.955170    8040 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0229 19:42:00.955711    8040 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0229 19:42:00.955851    8040 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0229 19:42:00.955851    8040 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0229 19:42:00.955851    8040 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0229 19:42:00.956387    8040 kubeadm.go:406] StartCluster complete in 3m58.3511774s
	I0229 19:42:00.964993    8040 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0229 19:42:00.998772    8040 logs.go:276] 0 containers: []
	W0229 19:42:00.998832    8040 logs.go:278] No container was found matching "kube-apiserver"
	I0229 19:42:01.006096    8040 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0229 19:42:01.031246    8040 logs.go:276] 0 containers: []
	W0229 19:42:01.031246    8040 logs.go:278] No container was found matching "etcd"
	I0229 19:42:01.038211    8040 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0229 19:42:01.061412    8040 logs.go:276] 0 containers: []
	W0229 19:42:01.061412    8040 logs.go:278] No container was found matching "coredns"
	I0229 19:42:01.069210    8040 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0229 19:42:01.098371    8040 logs.go:276] 0 containers: []
	W0229 19:42:01.103802    8040 logs.go:278] No container was found matching "kube-scheduler"
	I0229 19:42:01.111148    8040 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0229 19:42:01.140050    8040 logs.go:276] 0 containers: []
	W0229 19:42:01.140160    8040 logs.go:278] No container was found matching "kube-proxy"
	I0229 19:42:01.148751    8040 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0229 19:42:01.181206    8040 logs.go:276] 0 containers: []
	W0229 19:42:01.181206    8040 logs.go:278] No container was found matching "kube-controller-manager"
	I0229 19:42:01.188718    8040 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0229 19:42:01.229980    8040 logs.go:276] 0 containers: []
	W0229 19:42:01.229980    8040 logs.go:278] No container was found matching "kindnet"
	I0229 19:42:01.229980    8040 logs.go:123] Gathering logs for kubelet ...
	I0229 19:42:01.230054    8040 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0229 19:42:01.310368    8040 logs.go:123] Gathering logs for dmesg ...
	I0229 19:42:01.310368    8040 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0229 19:42:01.334711    8040 logs.go:123] Gathering logs for describe nodes ...
	I0229 19:42:01.334785    8040 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0229 19:42:01.430687    8040 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0229 19:42:01.430687    8040 logs.go:123] Gathering logs for Docker ...
	I0229 19:42:01.430687    8040 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0229 19:42:01.482021    8040 logs.go:123] Gathering logs for container status ...
	I0229 19:42:01.482021    8040 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0229 19:42:01.583089    8040 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0229 19:42:01.583191    8040 out.go:239] * 
	W0229 19:42:01.583191    8040 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 19:42:01.583608    8040 out.go:239] * 
	W0229 19:42:01.584803    8040 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 19:42:01.599527    8040 out.go:177] 
	W0229 19:42:01.600298    8040 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0229 19:42:01.600484    8040 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0229 19:42:01.600590    8040 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0229 19:42:01.644656    8040 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-800700 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperv: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-800700
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-800700: (22.0483257s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-800700 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-800700 status --format={{.Host}}: exit status 7 (2.2390143s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0229 19:42:24.295883   10592 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-800700 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-800700 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv: (5m37.0313526s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-800700 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-800700 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperv
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-800700 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperv: exit status 106 (230.0292ms)

-- stdout --
	* [kubernetes-upgrade-800700] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	W0229 19:48:03.738413    6336 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-800700
	    minikube start -p kubernetes-upgrade-800700 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8007002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-800700 --kubernetes-version=v1.29.0-rc.2
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-800700 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:275: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-800700 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv: (5m15.2126088s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-02-29 19:53:19.0912299 +0000 UTC m=+8089.980684001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-800700 -n kubernetes-upgrade-800700
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-800700 -n kubernetes-upgrade-800700: (13.7967161s)
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-800700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p kubernetes-upgrade-800700 logs -n 25: (8.8495579s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| Command |               Args                |          Profile          |       User        | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| delete  | -p cilium-863900                  | cilium-863900             | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:31 UTC | 29 Feb 24 19:31 UTC |
	| start   | -p force-systemd-env-090200       | force-systemd-env-090200  | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:31 UTC | 29 Feb 24 19:40 UTC |
	|         | --memory=2048                     |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=5            |                           |                   |         |                     |                     |
	|         | --driver=hyperv                   |                           |                   |         |                     |                     |
	| delete  | -p offline-docker-863600          | offline-docker-863600     | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:35 UTC | 29 Feb 24 19:35 UTC |
	| start   | -p force-systemd-flag-584100      | force-systemd-flag-584100 | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:35 UTC | 29 Feb 24 19:42 UTC |
	|         | --memory=2048 --force-systemd     |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=5            |                           |                   |         |                     |                     |
	|         | --driver=hyperv                   |                           |                   |         |                     |                     |
	| ssh     | docker-flags-012500 ssh           | docker-flags-012500       | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:36 UTC | 29 Feb 24 19:36 UTC |
	|         | sudo systemctl show docker        |                           |                   |         |                     |                     |
	|         | --property=Environment            |                           |                   |         |                     |                     |
	|         | --no-pager                        |                           |                   |         |                     |                     |
	| ssh     | docker-flags-012500 ssh           | docker-flags-012500       | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:36 UTC | 29 Feb 24 19:36 UTC |
	|         | sudo systemctl show docker        |                           |                   |         |                     |                     |
	|         | --property=ExecStart              |                           |                   |         |                     |                     |
	|         | --no-pager                        |                           |                   |         |                     |                     |
	| delete  | -p docker-flags-012500            | docker-flags-012500       | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:36 UTC | 29 Feb 24 19:37 UTC |
	| start   | -p stopped-upgrade-829200         | minikube                  | minikube5\jenkins | v1.26.0 | 29 Feb 24 19:37 GMT | 29 Feb 24 19:44 GMT |
	|         | --memory=2200                     |                           |                   |         |                     |                     |
	|         | --vm-driver=hyperv                |                           |                   |         |                     |                     |
	| ssh     | force-systemd-env-090200          | force-systemd-env-090200  | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:40 UTC | 29 Feb 24 19:40 UTC |
	|         | ssh docker info --format          |                           |                   |         |                     |                     |
	|         | {{.CgroupDriver}}                 |                           |                   |         |                     |                     |
	| delete  | -p force-systemd-env-090200       | force-systemd-env-090200  | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:40 UTC | 29 Feb 24 19:41 UTC |
	| start   | -p running-upgrade-764600         | minikube                  | minikube5\jenkins | v1.26.0 | 29 Feb 24 19:41 GMT | 29 Feb 24 19:46 GMT |
	|         | --memory=2200                     |                           |                   |         |                     |                     |
	|         | --vm-driver=hyperv                |                           |                   |         |                     |                     |
	| stop    | -p kubernetes-upgrade-800700      | kubernetes-upgrade-800700 | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:42 UTC | 29 Feb 24 19:42 UTC |
	| start   | -p kubernetes-upgrade-800700      | kubernetes-upgrade-800700 | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:42 UTC | 29 Feb 24 19:48 UTC |
	|         | --memory=2200                     |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1            |                           |                   |         |                     |                     |
	|         | --driver=hyperv                   |                           |                   |         |                     |                     |
	| ssh     | force-systemd-flag-584100         | force-systemd-flag-584100 | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:42 UTC | 29 Feb 24 19:42 UTC |
	|         | ssh docker info --format          |                           |                   |         |                     |                     |
	|         | {{.CgroupDriver}}                 |                           |                   |         |                     |                     |
	| delete  | -p force-systemd-flag-584100      | force-systemd-flag-584100 | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:42 UTC | 29 Feb 24 19:43 UTC |
	| start   | -p cert-expiration-587600         | cert-expiration-587600    | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:43 UTC | 29 Feb 24 19:50 UTC |
	|         | --memory=2048                     |                           |                   |         |                     |                     |
	|         | --cert-expiration=3m              |                           |                   |         |                     |                     |
	|         | --driver=hyperv                   |                           |                   |         |                     |                     |
	| stop    | stopped-upgrade-829200 stop       | minikube                  | minikube5\jenkins | v1.26.0 | 29 Feb 24 19:44 GMT | 29 Feb 24 19:45 GMT |
	| start   | -p stopped-upgrade-829200         | stopped-upgrade-829200    | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:45 UTC | 29 Feb 24 19:51 UTC |
	|         | --memory=2200                     |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1            |                           |                   |         |                     |                     |
	|         | --driver=hyperv                   |                           |                   |         |                     |                     |
	| start   | -p running-upgrade-764600         | running-upgrade-764600    | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:46 UTC | 29 Feb 24 19:53 UTC |
	|         | --memory=2200                     |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1            |                           |                   |         |                     |                     |
	|         | --driver=hyperv                   |                           |                   |         |                     |                     |
	| start   | -p kubernetes-upgrade-800700      | kubernetes-upgrade-800700 | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:48 UTC |                     |
	|         | --memory=2200                     |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                           |                   |         |                     |                     |
	|         | --driver=hyperv                   |                           |                   |         |                     |                     |
	| start   | -p kubernetes-upgrade-800700      | kubernetes-upgrade-800700 | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:48 UTC | 29 Feb 24 19:53 UTC |
	|         | --memory=2200                     |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1            |                           |                   |         |                     |                     |
	|         | --driver=hyperv                   |                           |                   |         |                     |                     |
	| delete  | -p stopped-upgrade-829200         | stopped-upgrade-829200    | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:51 UTC | 29 Feb 24 19:52 UTC |
	| start   | -p pause-027300 --memory=2048     | pause-027300              | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:52 UTC |                     |
	|         | --install-addons=false            |                           |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv        |                           |                   |         |                     |                     |
	| start   | -p cert-expiration-587600         | cert-expiration-587600    | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:53 UTC |                     |
	|         | --memory=2048                     |                           |                   |         |                     |                     |
	|         | --cert-expiration=8760h           |                           |                   |         |                     |                     |
	|         | --driver=hyperv                   |                           |                   |         |                     |                     |
	| delete  | -p running-upgrade-764600         | running-upgrade-764600    | minikube5\jenkins | v1.32.0 | 29 Feb 24 19:53 UTC |                     |
	|---------|-----------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 19:53:12
	Running on machine: minikube5
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 19:53:12.744643   10528 out.go:291] Setting OutFile to fd 1380 ...
	I0229 19:53:12.745390   10528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 19:53:12.745390   10528 out.go:304] Setting ErrFile to fd 1256...
	I0229 19:53:12.745390   10528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 19:53:12.774626   10528 out.go:298] Setting JSON to false
	I0229 19:53:12.782405   10528 start.go:129] hostinfo: {"hostname":"minikube5","uptime":58129,"bootTime":1709178263,"procs":203,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0229 19:53:12.782405   10528 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 19:53:12.783626   10528 out.go:177] * [cert-expiration-587600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 19:53:12.785234   10528 notify.go:220] Checking for updates...
	I0229 19:53:12.785919   10528 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 19:53:12.786562   10528 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 19:53:12.787890   10528 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0229 19:53:12.788335   10528 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 19:53:12.789053   10528 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 19:53:10.462913    8412 main.go:141] libmachine: [stdout =====>] : 
	Name         State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----         ----- ----------- ----------------- ------   ------             -------
	pause-027300 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0229 19:53:10.462977    8412 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:53:10.462977    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName pause-027300 -DynamicMemoryEnabled $false
	I0229 19:53:12.738712    8412 main.go:141] libmachine: [stdout =====>] : 
	I0229 19:53:12.738712    8412 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:53:12.738712    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor pause-027300 -Count 2
	I0229 19:53:08.835227   10224 api_server.go:279] https://172.26.57.211:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 19:53:08.840680   10224 api_server.go:103] status: https://172.26.57.211:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 19:53:08.840741   10224 api_server.go:253] Checking apiserver healthz at https://172.26.57.211:8443/healthz ...
	I0229 19:53:10.344159   13548 kapi.go:59] client config for kubernetes-upgrade-800700: &rest.Config{Host:"https://172.26.63.14:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\kubernetes-upgrade-800700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\kubernetes-upgrade-800700\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ff0600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 19:53:10.344159   13548 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 19:53:10.344698   13548 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 19:53:10.344698   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-800700 ).state
	I0229 19:53:10.345467   13548 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-800700"
	W0229 19:53:10.345539   13548 addons.go:243] addon default-storageclass should already be in state true
	I0229 19:53:10.345539   13548 host.go:66] Checking if "kubernetes-upgrade-800700" exists ...
	I0229 19:53:10.346546   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-800700 ).state
	I0229 19:53:12.652929   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:53:12.652985   13548 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:53:12.653264   13548 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 19:53:12.653317   13548 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 19:53:12.653440   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-800700 ).state
	I0229 19:53:12.693318   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:53:12.693318   13548 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:53:12.693318   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-800700 ).networkadapters[0]).ipaddresses[0]
	I0229 19:53:13.848794   10224 api_server.go:269] stopped: https://172.26.57.211:8443/healthz: Get "https://172.26.57.211:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0229 19:53:13.848873   10224 api_server.go:253] Checking apiserver healthz at https://172.26.57.211:8443/healthz ...
	I0229 19:53:15.078788   10224 api_server.go:279] https://172.26.57.211:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 19:53:15.083675   10224 api_server.go:103] status: https://172.26.57.211:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 19:53:15.083675   10224 api_server.go:253] Checking apiserver healthz at https://172.26.57.211:8443/healthz ...
	I0229 19:53:15.255160   10224 api_server.go:279] https://172.26.57.211:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 19:53:15.260872   10224 api_server.go:103] status: https://172.26.57.211:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 19:53:15.260872   10224 api_server.go:253] Checking apiserver healthz at https://172.26.57.211:8443/healthz ...
	I0229 19:53:15.326696   10224 api_server.go:279] https://172.26.57.211:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0229 19:53:15.326696   10224 api_server.go:103] status: https://172.26.57.211:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0229 19:53:15.629860   10224 api_server.go:253] Checking apiserver healthz at https://172.26.57.211:8443/healthz ...
	I0229 19:53:15.638582   10224 api_server.go:279] https://172.26.57.211:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0229 19:53:15.638582   10224 api_server.go:103] status: https://172.26.57.211:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0229 19:53:16.140362   10224 api_server.go:253] Checking apiserver healthz at https://172.26.57.211:8443/healthz ...
	I0229 19:53:16.155930   10224 api_server.go:279] https://172.26.57.211:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0229 19:53:16.155930   10224 api_server.go:103] status: https://172.26.57.211:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0229 19:53:16.640324   10224 api_server.go:253] Checking apiserver healthz at https://172.26.57.211:8443/healthz ...
	I0229 19:53:16.651630   10224 api_server.go:279] https://172.26.57.211:8443/healthz returned 200:
	ok
	I0229 19:53:16.664476   10224 api_server.go:141] control plane version: v1.24.1
	I0229 19:53:16.664476   10224 api_server.go:131] duration metric: took 15.5409192s to wait for apiserver health ...
	I0229 19:53:16.664578   10224 cni.go:84] Creating CNI manager for ""
	I0229 19:53:16.664578   10224 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 19:53:16.665394   10224 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0229 19:53:16.679858   10224 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0229 19:53:16.698771   10224 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0229 19:53:16.794491   10224 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 19:53:16.810434   10224 system_pods.go:59] 7 kube-system pods found
	I0229 19:53:16.810552   10224 system_pods.go:61] "coredns-6d4b75cb6d-2s85q" [fa660979-cb29-4e72-bb70-cb1854c47af6] Running
	I0229 19:53:16.810552   10224 system_pods.go:61] "etcd-running-upgrade-764600" [cd9285e2-ce78-4553-a742-739cc79bd208] Running
	I0229 19:53:16.810552   10224 system_pods.go:61] "kube-apiserver-running-upgrade-764600" [43622952-3f37-4495-beec-bad7e224d3c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 19:53:16.810552   10224 system_pods.go:61] "kube-controller-manager-running-upgrade-764600" [39ca72e5-fc24-4eba-80e8-e57f43a9ac54] Running
	I0229 19:53:16.810613   10224 system_pods.go:61] "kube-proxy-f9n4h" [b4ac66ef-e7cf-4f0b-825b-ad7feb481a92] Running
	I0229 19:53:16.810613   10224 system_pods.go:61] "kube-scheduler-running-upgrade-764600" [3a4536e6-5d4a-4157-93ef-260d6260ddf5] Running
	I0229 19:53:16.810672   10224 system_pods.go:61] "storage-provisioner" [fa49659c-db8c-4a82-a362-f49a6618862c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 19:53:16.810672   10224 system_pods.go:74] duration metric: took 16.132ms to wait for pod list to return data ...
	I0229 19:53:16.810672   10224 node_conditions.go:102] verifying NodePressure condition ...
	I0229 19:53:16.818079   10224 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0229 19:53:16.818191   10224 node_conditions.go:123] node cpu capacity is 2
	I0229 19:53:16.818191   10224 node_conditions.go:105] duration metric: took 7.4669ms to run NodePressure ...
	I0229 19:53:16.818247   10224 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0229 19:53:17.435415   10224 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0229 19:53:17.483849   10224 ops.go:34] apiserver oom_adj: -16
	I0229 19:53:17.483849   10224 kubeadm.go:640] restartCluster took 25.664188s
	I0229 19:53:17.483849   10224 kubeadm.go:406] StartCluster complete in 25.7386681s
	I0229 19:53:17.483849   10224 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:53:17.483849   10224 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 19:53:17.485957   10224 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:53:17.487233   10224 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0229 19:53:17.487291   10224 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0229 19:53:17.487708   10224 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-764600"
	I0229 19:53:17.487708   10224 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-764600"
	I0229 19:53:17.487762   10224 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-764600"
	W0229 19:53:17.487808   10224 addons.go:243] addon storage-provisioner should already be in state true
	I0229 19:53:17.487808   10224 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-764600"
	I0229 19:53:17.487945   10224 host.go:66] Checking if "running-upgrade-764600" exists ...
	I0229 19:53:17.487945   10224 config.go:182] Loaded profile config "running-upgrade-764600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0229 19:53:17.488666   10224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-764600 ).state
	I0229 19:53:17.489712   10224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-764600 ).state
	I0229 19:53:17.509965   10224 kapi.go:59] client config for running-upgrade-764600: &rest.Config{Host:"https://172.26.57.211:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\running-upgrade-764600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\running-upgrade-764600\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ff0600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 19:53:17.537487   10224 kapi.go:248] "coredns" deployment in "kube-system" namespace and "running-upgrade-764600" context rescaled to 1 replicas
	I0229 19:53:17.537661   10224 start.go:223] Will wait 6m0s for node &{Name: IP:172.26.57.211 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 19:53:17.538874   10224 out.go:177] * Verifying Kubernetes components...
	I0229 19:53:12.790277   10528 config.go:182] Loaded profile config "cert-expiration-587600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 19:53:12.791484   10528 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 19:53:15.204444    8412 main.go:141] libmachine: [stdout =====>] : 
	I0229 19:53:15.204444    8412 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:53:15.204510    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName pause-027300 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\pause-027300\boot2docker.iso'
	I0229 19:53:18.121252    8412 main.go:141] libmachine: [stdout =====>] : 
	I0229 19:53:18.121252    8412 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:53:18.135914    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName pause-027300 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\pause-027300\disk.vhd'
	I0229 19:53:17.551954   10224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:53:18.334000   10224 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0229 19:53:18.334000   10224 api_server.go:52] waiting for apiserver process to appear ...
	I0229 19:53:18.352513   10224 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:53:18.402297   10224 api_server.go:72] duration metric: took 864.483ms to wait for apiserver process to appear ...
	I0229 19:53:18.402394   10224 api_server.go:88] waiting for apiserver healthz status ...
	I0229 19:53:18.402394   10224 api_server.go:253] Checking apiserver healthz at https://172.26.57.211:8443/healthz ...
	I0229 19:53:18.426883   10224 api_server.go:279] https://172.26.57.211:8443/healthz returned 200:
	ok
	I0229 19:53:18.429088   10224 api_server.go:141] control plane version: v1.24.1
	I0229 19:53:18.429088   10224 api_server.go:131] duration metric: took 26.6919ms to wait for apiserver health ...
	I0229 19:53:18.429088   10224 system_pods.go:43] waiting for kube-system pods to appear ...
	I0229 19:53:18.452047   10224 system_pods.go:59] 7 kube-system pods found
	I0229 19:53:18.452047   10224 system_pods.go:61] "coredns-6d4b75cb6d-2s85q" [fa660979-cb29-4e72-bb70-cb1854c47af6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0229 19:53:18.452047   10224 system_pods.go:61] "etcd-running-upgrade-764600" [cd9285e2-ce78-4553-a742-739cc79bd208] Running
	I0229 19:53:18.452047   10224 system_pods.go:61] "kube-apiserver-running-upgrade-764600" [43622952-3f37-4495-beec-bad7e224d3c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0229 19:53:18.452047   10224 system_pods.go:61] "kube-controller-manager-running-upgrade-764600" [39ca72e5-fc24-4eba-80e8-e57f43a9ac54] Running
	I0229 19:53:18.452047   10224 system_pods.go:61] "kube-proxy-f9n4h" [b4ac66ef-e7cf-4f0b-825b-ad7feb481a92] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0229 19:53:18.452047   10224 system_pods.go:61] "kube-scheduler-running-upgrade-764600" [3a4536e6-5d4a-4157-93ef-260d6260ddf5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0229 19:53:18.452587   10224 system_pods.go:61] "storage-provisioner" [fa49659c-db8c-4a82-a362-f49a6618862c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0229 19:53:18.452587   10224 system_pods.go:74] duration metric: took 23.4981ms to wait for pod list to return data ...
	I0229 19:53:18.452587   10224 kubeadm.go:581] duration metric: took 914.7707ms to wait for : map[apiserver:true system_pods:true] ...
	I0229 19:53:18.452671   10224 node_conditions.go:102] verifying NodePressure condition ...
	I0229 19:53:18.459541   10224 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0229 19:53:18.459599   10224 node_conditions.go:123] node cpu capacity is 2
	I0229 19:53:18.459599   10224 node_conditions.go:105] duration metric: took 6.9277ms to run NodePressure ...
	I0229 19:53:18.459662   10224 start.go:228] waiting for startup goroutines ...
	I0229 19:53:15.202403   13548 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:53:15.202465   13548 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:53:15.202662   13548 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-800700 ).networkadapters[0]).ipaddresses[0]
	I0229 19:53:15.670413   13548 main.go:141] libmachine: [stdout =====>] : 172.26.63.14
	
	I0229 19:53:15.670413   13548 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:53:15.671214   13548 sshutil.go:53] new ssh client: &{IP:172.26.63.14 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kubernetes-upgrade-800700\id_rsa Username:docker}
	I0229 19:53:15.858331   13548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 19:53:16.869417   13548 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.0110293s)
	I0229 19:53:18.200360   13548 main.go:141] libmachine: [stdout =====>] : 172.26.63.14
	
	I0229 19:53:18.200414   13548 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:53:18.200752   13548 sshutil.go:53] new ssh client: &{IP:172.26.63.14 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kubernetes-upgrade-800700\id_rsa Username:docker}
	I0229 19:53:18.383212   13548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 19:53:18.815435   13548 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0229 19:53:18.988665   10528 out.go:177] * Using the hyperv driver based on existing profile
	I0229 19:53:18.816370   13548 addons.go:505] enable addons completed in 10.7488913s: enabled=[storage-provisioner default-storageclass]
	I0229 19:53:18.816370   13548 start.go:233] waiting for cluster config update ...
	I0229 19:53:18.816370   13548 start.go:242] writing updated cluster config ...
	I0229 19:53:18.831191   13548 ssh_runner.go:195] Run: rm -f paused
	I0229 19:53:19.022470   13548 start.go:601] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0229 19:53:19.025046   13548 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-800700" cluster and "default" namespace by default
	I0229 19:53:18.989666   10528 start.go:299] selected driver: hyperv
	I0229 19:53:18.989666   10528 start.go:903] validating driver "hyperv" against &{Name:cert-expiration-587600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:cert-expiration-587600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.26.49.189 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 19:53:18.989876   10528 start.go:914] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 19:53:19.052748   10528 cni.go:84] Creating CNI manager for ""
	I0229 19:53:19.052748   10528 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 19:53:19.052748   10528 start_flags.go:323] config:
	{Name:cert-expiration-587600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:cert-expiration-587600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.26.49.189 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 19:53:19.052748   10528 iso.go:125] acquiring lock: {Name:mk91f2ee29fbed5605669750e8cfa308a1229357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 19:53:19.052748   10528 out.go:177] * Starting control plane node cert-expiration-587600 in cluster cert-expiration-587600
	I0229 19:53:20.720930   10224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:53:20.720930   10224 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:53:20.722947   10224 kapi.go:59] client config for running-upgrade-764600: &rest.Config{Host:"https://172.26.57.211:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\running-upgrade-764600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\running-upgrade-764600\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ff0600), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0229 19:53:20.723930   10224 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-764600"
	W0229 19:53:20.724014   10224 addons.go:243] addon default-storageclass should already be in state true
	I0229 19:53:20.724098   10224 host.go:66] Checking if "running-upgrade-764600" exists ...
	I0229 19:53:20.724952   10224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-764600 ).state
	I0229 19:53:20.762516   10224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:53:20.762613   10224 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:53:20.763274   10224 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 19:53:19.056770   10528 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 19:53:19.056770   10528 preload.go:148] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0229 19:53:19.056770   10528 cache.go:56] Caching tarball of preloaded images
	I0229 19:53:19.056770   10528 preload.go:174] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 19:53:19.056770   10528 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0229 19:53:19.057881   10528 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\cert-expiration-587600\config.json ...
	I0229 19:53:19.059508   10528 start.go:365] acquiring machines lock for cert-expiration-587600: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 19:53:21.229428    8412 main.go:141] libmachine: [stdout =====>] : 
	I0229 19:53:21.229428    8412 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:53:21.229428    8412 main.go:141] libmachine: Starting VM...
	I0229 19:53:21.229902    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM pause-027300
	I0229 19:53:20.764251   10224 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 19:53:20.764343   10224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0229 19:53:20.764389   10224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-764600 ).state
	I0229 19:53:23.102477   10224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:53:23.102553   10224 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:53:23.102553   10224 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0229 19:53:23.102553   10224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0229 19:53:23.102553   10224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-764600 ).state
	I0229 19:53:23.150119   10224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:53:23.158182   10224 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:53:23.158260   10224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-764600 ).networkadapters[0]).ipaddresses[0]
	I0229 19:53:24.439234    8412 main.go:141] libmachine: [stdout =====>] : 
	I0229 19:53:24.439234    8412 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:53:24.439234    8412 main.go:141] libmachine: Waiting for host to start...
	I0229 19:53:24.439234    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-027300 ).state
	I0229 19:53:26.786626    8412 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:53:26.786626    8412 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:53:26.789495    8412 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-027300 ).networkadapters[0]).ipaddresses[0]
	I0229 19:53:25.523654   10224 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:53:25.523654   10224 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:53:25.523654   10224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-764600 ).networkadapters[0]).ipaddresses[0]
	I0229 19:53:25.997529   10224 main.go:141] libmachine: [stdout =====>] : 172.26.57.211
	
	I0229 19:53:25.997613   10224 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:53:25.997613   10224 sshutil.go:53] new ssh client: &{IP:172.26.57.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\running-upgrade-764600\id_rsa Username:docker}
	I0229 19:53:26.153884   10224 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0229 19:53:27.407086   10224 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.2531319s)
	I0229 19:53:28.314906   10224 main.go:141] libmachine: [stdout =====>] : 172.26.57.211
	
	I0229 19:53:28.314906   10224 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:53:28.315770   10224 sshutil.go:53] new ssh client: &{IP:172.26.57.211 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\running-upgrade-764600\id_rsa Username:docker}
	I0229 19:53:28.461669   10224 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0229 19:53:28.730700   10224 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0229 19:53:28.730918   10224 addons.go:505] enable addons completed in 11.2430599s: enabled=[storage-provisioner default-storageclass]
	I0229 19:53:28.730918   10224 start.go:233] waiting for cluster config update ...
	I0229 19:53:28.731531   10224 start.go:242] writing updated cluster config ...
	I0229 19:53:28.745165   10224 ssh_runner.go:195] Run: rm -f paused
	I0229 19:53:28.901838   10224 start.go:601] kubectl: 1.29.2, cluster: 1.24.1 (minor skew: 5)
	I0229 19:53:28.902634   10224 out.go:177] 
	W0229 19:53:28.903313   10224 out.go:239] ! C:\Program Files\Docker\Docker\resources\bin\kubectl.exe is version 1.29.2, which may have incompatibilities with Kubernetes 1.24.1.
	I0229 19:53:28.903973   10224 out.go:177]   - Want kubectl v1.24.1? Try 'minikube kubectl -- get pods -A'
	I0229 19:53:28.904726   10224 out.go:177] * Done! kubectl is now configured to use "running-upgrade-764600" cluster and "default" namespace by default
	
	
	==> Docker <==
	Feb 29 19:53:02 kubernetes-upgrade-800700 dockerd[6356]: time="2024-02-29T19:53:02.957854757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 19:53:02 kubernetes-upgrade-800700 dockerd[6356]: time="2024-02-29T19:53:02.957983570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 19:53:02 kubernetes-upgrade-800700 dockerd[6356]: time="2024-02-29T19:53:02.984915899Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 19:53:02 kubernetes-upgrade-800700 dockerd[6356]: time="2024-02-29T19:53:02.985211328Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 19:53:02 kubernetes-upgrade-800700 dockerd[6356]: time="2024-02-29T19:53:02.985332640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 19:53:02 kubernetes-upgrade-800700 dockerd[6356]: time="2024-02-29T19:53:02.985657572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 19:53:03 kubernetes-upgrade-800700 dockerd[6356]: time="2024-02-29T19:53:03.061373256Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 19:53:03 kubernetes-upgrade-800700 dockerd[6356]: time="2024-02-29T19:53:03.061510169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 19:53:03 kubernetes-upgrade-800700 dockerd[6356]: time="2024-02-29T19:53:03.061526971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 19:53:03 kubernetes-upgrade-800700 dockerd[6356]: time="2024-02-29T19:53:03.061709488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 19:53:03 kubernetes-upgrade-800700 cri-dockerd[6613]: time="2024-02-29T19:53:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/32f5edbc40cd3772e7b964661e00a0c7dce465412bd41b482c7ba0b60d6d9093/resolv.conf as [nameserver 172.26.48.1]"
	Feb 29 19:53:03 kubernetes-upgrade-800700 cri-dockerd[6613]: time="2024-02-29T19:53:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ac9f93c84106b9e6f6d218f50bf071af234daf6f0e682ffcee752ec973d3c81f/resolv.conf as [nameserver 172.26.48.1]"
	Feb 29 19:53:03 kubernetes-upgrade-800700 cri-dockerd[6613]: time="2024-02-29T19:53:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1bba973fcb7ac8a4e83a1fdb7bbd622d1c07752bf16a83eb112bd6b2c7cbd1ab/resolv.conf as [nameserver 172.26.48.1]"
	Feb 29 19:53:03 kubernetes-upgrade-800700 dockerd[6356]: time="2024-02-29T19:53:03.601850497Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 19:53:03 kubernetes-upgrade-800700 dockerd[6356]: time="2024-02-29T19:53:03.602073018Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 19:53:03 kubernetes-upgrade-800700 dockerd[6356]: time="2024-02-29T19:53:03.602087719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 19:53:03 kubernetes-upgrade-800700 dockerd[6356]: time="2024-02-29T19:53:03.602184728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 19:53:03 kubernetes-upgrade-800700 dockerd[6356]: time="2024-02-29T19:53:03.614330370Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 19:53:03 kubernetes-upgrade-800700 dockerd[6356]: time="2024-02-29T19:53:03.614430679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 19:53:03 kubernetes-upgrade-800700 dockerd[6356]: time="2024-02-29T19:53:03.614445581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 19:53:03 kubernetes-upgrade-800700 dockerd[6356]: time="2024-02-29T19:53:03.614562992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 19:53:03 kubernetes-upgrade-800700 dockerd[6356]: time="2024-02-29T19:53:03.635172829Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 29 19:53:03 kubernetes-upgrade-800700 dockerd[6356]: time="2024-02-29T19:53:03.636470451Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 29 19:53:03 kubernetes-upgrade-800700 dockerd[6356]: time="2024-02-29T19:53:03.636641567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 29 19:53:03 kubernetes-upgrade-800700 dockerd[6356]: time="2024-02-29T19:53:03.637061207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9ed70e11e6733       6e38f40d628db       38 seconds ago      Running             storage-provisioner       1                   1bba973fcb7ac       storage-provisioner
	46161581b9764       cbb01a7bd410d       38 seconds ago      Running             coredns                   1                   ac9f93c84106b       coredns-76f75df574-7swm2
	b56672c0067aa       cc0a4f00aad7b       38 seconds ago      Running             kube-proxy                1                   32f5edbc40cd3       kube-proxy-cnkv4
	748d5a892900d       4270645ed6b7a       39 seconds ago      Running             kube-scheduler            1                   9675365c0609c       kube-scheduler-kubernetes-upgrade-800700
	6efd56ff5c879       bbb47a0f83324       39 seconds ago      Running             kube-apiserver            1                   7932a798d86da       kube-apiserver-kubernetes-upgrade-800700
	2d8e8ea79b577       d4e01cdf63970       39 seconds ago      Running             kube-controller-manager   1                   71b5b61817dfe       kube-controller-manager-kubernetes-upgrade-800700
	2dc2210127d7e       a0eed15eed449       39 seconds ago      Running             etcd                      1                   e6b2fa9b12e37       etcd-kubernetes-upgrade-800700
	a16e3f52e1576       6e38f40d628db       5 minutes ago       Exited              storage-provisioner       0                   1f8a39433890f       storage-provisioner
	f6c2b70aae893       cc0a4f00aad7b       5 minutes ago       Exited              kube-proxy                0                   7c8af6e267dcc       kube-proxy-cnkv4
	dcdfaa1ef4163       cbb01a7bd410d       5 minutes ago       Exited              coredns                   0                   f90d033552b05       coredns-76f75df574-7swm2
	3919587f983d6       4270645ed6b7a       5 minutes ago       Exited              kube-scheduler            0                   55b0dbc80e6a0       kube-scheduler-kubernetes-upgrade-800700
	36afed20177d2       bbb47a0f83324       5 minutes ago       Exited              kube-apiserver            0                   0fb7953e26f41       kube-apiserver-kubernetes-upgrade-800700
	f8fd99b59c287       d4e01cdf63970       5 minutes ago       Exited              kube-controller-manager   0                   a1cce2c486089       kube-controller-manager-kubernetes-upgrade-800700
	1bd172af56599       a0eed15eed449       5 minutes ago       Exited              etcd                      0                   fcc182b61a734       etcd-kubernetes-upgrade-800700
	
	
	==> coredns [46161581b976] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 09f0998677e0c19d72433bdbc19471218bfe4a8b92405418740861874d1549e73cec4df8f6750d3139464010abec770181315be2b4c8b222ced8b0f05062ec0c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:53905 - 23011 "HINFO IN 8847060605680255634.7733535538787729182. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.051456255s
	
	
	==> coredns [dcdfaa1ef416] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 09f0998677e0c19d72433bdbc19471218bfe4a8b92405418740861874d1549e73cec4df8f6750d3139464010abec770181315be2b4c8b222ced8b0f05062ec0c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48313 - 22068 "HINFO IN 3174042899476246361.8426818165429667237. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.032333151s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[632525688]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Feb-2024 19:48:05.791) (total time: 30000ms):
	Trace[632525688]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (19:48:35.791)
	Trace[632525688]: [30.000588603s] [30.000588603s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2062397741]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Feb-2024 19:48:05.790) (total time: 30001ms):
	Trace[2062397741]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (19:48:35.791)
	Trace[2062397741]: [30.001814274s] [30.001814274s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[747590368]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Feb-2024 19:48:05.791) (total time: 30001ms):
	Trace[747590368]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (19:48:35.792)
	Trace[747590368]: [30.001732654s] [30.001732654s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-800700
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-800700
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Feb 2024 19:47:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-800700
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Feb 2024 19:53:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Feb 2024 19:53:17 +0000   Thu, 29 Feb 2024 19:47:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Feb 2024 19:53:17 +0000   Thu, 29 Feb 2024 19:47:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Feb 2024 19:53:17 +0000   Thu, 29 Feb 2024 19:47:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Feb 2024 19:53:17 +0000   Thu, 29 Feb 2024 19:47:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.26.63.14
	  Hostname:    kubernetes-upgrade-800700
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164268Ki
	  pods:               110
	System Info:
	  Machine ID:                 635be3c54cc94bd083c65781ee3076e6
	  System UUID:                85739d25-c85b-b84f-aa4b-aeeb7b081451
	  Boot ID:                    bd244725-b4e5-4397-af06-31b4b46d496a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-7swm2                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m37s
	  kube-system                 etcd-kubernetes-upgrade-800700                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m46s
	  kube-system                 kube-apiserver-kubernetes-upgrade-800700             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m48s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-800700    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m44s
	  kube-system                 kube-proxy-cnkv4                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	  kube-system                 kube-scheduler-kubernetes-upgrade-800700             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m35s                  kube-proxy       
	  Normal  Starting                 34s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  5m57s (x8 over 5m57s)  kubelet          Node kubernetes-upgrade-800700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m57s (x8 over 5m57s)  kubelet          Node kubernetes-upgrade-800700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m57s (x7 over 5m57s)  kubelet          Node kubernetes-upgrade-800700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m38s                  node-controller  Node kubernetes-upgrade-800700 event: Registered Node kubernetes-upgrade-800700 in Controller
	  Normal  RegisteredNode           22s                    node-controller  Node kubernetes-upgrade-800700 event: Registered Node kubernetes-upgrade-800700 in Controller
	
	
	==> dmesg <==
	[  +0.503269] systemd-fstab-generator[1051]: Ignoring "noauto" option for root device
	[  +0.190298] systemd-fstab-generator[1063]: Ignoring "noauto" option for root device
	[  +0.226164] systemd-fstab-generator[1077]: Ignoring "noauto" option for root device
	[  +1.806994] systemd-fstab-generator[1236]: Ignoring "noauto" option for root device
	[  +0.202667] systemd-fstab-generator[1248]: Ignoring "noauto" option for root device
	[  +0.203290] systemd-fstab-generator[1260]: Ignoring "noauto" option for root device
	[  +0.271625] systemd-fstab-generator[1275]: Ignoring "noauto" option for root device
	[ +12.111252] systemd-fstab-generator[1479]: Ignoring "noauto" option for root device
	[  +0.095191] kauditd_printk_skb: 205 callbacks suppressed
	[  +5.687429] systemd-fstab-generator[1775]: Ignoring "noauto" option for root device
	[  +0.100771] kauditd_printk_skb: 51 callbacks suppressed
	[Feb29 19:48] kauditd_printk_skb: 62 callbacks suppressed
	[ +39.853090] kauditd_printk_skb: 51 callbacks suppressed
	[Feb29 19:51] hrtimer: interrupt took 6131051 ns
	[Feb29 19:52] systemd-fstab-generator[5858]: Ignoring "noauto" option for root device
	[  +0.755835] systemd-fstab-generator[5902]: Ignoring "noauto" option for root device
	[  +0.307949] systemd-fstab-generator[5920]: Ignoring "noauto" option for root device
	[  +0.414330] systemd-fstab-generator[5934]: Ignoring "noauto" option for root device
	[  +5.310879] kauditd_printk_skb: 89 callbacks suppressed
	[  +6.960569] systemd-fstab-generator[6561]: Ignoring "noauto" option for root device
	[  +0.211044] systemd-fstab-generator[6573]: Ignoring "noauto" option for root device
	[  +0.219531] systemd-fstab-generator[6585]: Ignoring "noauto" option for root device
	[  +0.301394] systemd-fstab-generator[6600]: Ignoring "noauto" option for root device
	[Feb29 19:53] kauditd_printk_skb: 118 callbacks suppressed
	[  +5.537194] kauditd_printk_skb: 67 callbacks suppressed
	
	
	==> etcd [1bd172af5659] <==
	{"level":"info","ts":"2024-02-29T19:47:46.009623Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a5a2cf80f590a6a5","local-member-id":"b14c3400d736e764","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T19:47:46.013661Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T19:47:46.013953Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T19:49:33.150762Z","caller":"traceutil/trace.go:171","msg":"trace[1648516708] transaction","detail":"{read_only:false; response_revision:451; number_of_response:1; }","duration":"182.618699ms","start":"2024-02-29T19:49:32.968124Z","end":"2024-02-29T19:49:33.150743Z","steps":["trace[1648516708] 'process raft request'  (duration: 182.497194ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T19:49:35.395362Z","caller":"traceutil/trace.go:171","msg":"trace[770028143] transaction","detail":"{read_only:false; response_revision:452; number_of_response:1; }","duration":"235.473268ms","start":"2024-02-29T19:49:35.15987Z","end":"2024-02-29T19:49:35.395343Z","steps":["trace[770028143] 'process raft request'  (duration: 235.151756ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T19:49:41.84351Z","caller":"traceutil/trace.go:171","msg":"trace[1171478595] transaction","detail":"{read_only:false; response_revision:458; number_of_response:1; }","duration":"175.99018ms","start":"2024-02-29T19:49:41.667502Z","end":"2024-02-29T19:49:41.843492Z","steps":["trace[1171478595] 'process raft request'  (duration: 83.494864ms)","trace[1171478595] 'compare'  (duration: 92.352811ms)"],"step_count":2}
	{"level":"info","ts":"2024-02-29T19:51:04.401519Z","caller":"traceutil/trace.go:171","msg":"trace[1920389334] transaction","detail":"{read_only:false; response_revision:523; number_of_response:1; }","duration":"144.352075ms","start":"2024-02-29T19:51:04.25715Z","end":"2024-02-29T19:51:04.401502Z","steps":["trace[1920389334] 'process raft request'  (duration: 144.033662ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T19:51:06.548779Z","caller":"traceutil/trace.go:171","msg":"trace[556685660] transaction","detail":"{read_only:false; response_revision:524; number_of_response:1; }","duration":"136.324252ms","start":"2024-02-29T19:51:06.412433Z","end":"2024-02-29T19:51:06.548758Z","steps":["trace[556685660] 'process raft request'  (duration: 135.948237ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T19:52:19.401616Z","caller":"traceutil/trace.go:171","msg":"trace[1308414552] transaction","detail":"{read_only:false; response_revision:581; number_of_response:1; }","duration":"186.023515ms","start":"2024-02-29T19:52:19.215572Z","end":"2024-02-29T19:52:19.401595Z","steps":["trace[1308414552] 'process raft request'  (duration: 185.842608ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T19:52:25.003687Z","caller":"traceutil/trace.go:171","msg":"trace[1789121568] transaction","detail":"{read_only:false; response_revision:586; number_of_response:1; }","duration":"139.361185ms","start":"2024-02-29T19:52:24.864307Z","end":"2024-02-29T19:52:25.003669Z","steps":["trace[1789121568] 'process raft request'  (duration: 138.739959ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T19:52:27.699899Z","caller":"traceutil/trace.go:171","msg":"trace[662379576] transaction","detail":"{read_only:false; response_revision:588; number_of_response:1; }","duration":"127.802708ms","start":"2024-02-29T19:52:27.57208Z","end":"2024-02-29T19:52:27.699882Z","steps":["trace[662379576] 'process raft request'  (duration: 127.492796ms)"],"step_count":1}
	{"level":"warn","ts":"2024-02-29T19:52:30.099868Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"220.178749ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-02-29T19:52:30.09996Z","caller":"traceutil/trace.go:171","msg":"trace[173371004] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:589; }","duration":"220.279553ms","start":"2024-02-29T19:52:29.879666Z","end":"2024-02-29T19:52:30.099945Z","steps":["trace[173371004] 'range keys from in-memory index tree'  (duration: 219.981641ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T19:52:30.808824Z","caller":"traceutil/trace.go:171","msg":"trace[1109491457] transaction","detail":"{read_only:false; response_revision:590; number_of_response:1; }","duration":"105.718494ms","start":"2024-02-29T19:52:30.703085Z","end":"2024-02-29T19:52:30.808803Z","steps":["trace[1109491457] 'process raft request'  (duration: 54.631671ms)","trace[1109491457] 'compare'  (duration: 50.985519ms)"],"step_count":2}
	{"level":"info","ts":"2024-02-29T19:52:40.208382Z","caller":"traceutil/trace.go:171","msg":"trace[1420277662] transaction","detail":"{read_only:false; response_revision:597; number_of_response:1; }","duration":"139.138006ms","start":"2024-02-29T19:52:40.069222Z","end":"2024-02-29T19:52:40.20836Z","steps":["trace[1420277662] 'process raft request'  (duration: 138.977799ms)"],"step_count":1}
	{"level":"info","ts":"2024-02-29T19:52:46.705398Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-02-29T19:52:46.705462Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"kubernetes-upgrade-800700","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://172.26.63.14:2380"],"advertise-client-urls":["https://172.26.63.14:2379"]}
	{"level":"warn","ts":"2024-02-29T19:52:46.705528Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-29T19:52:46.705609Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-29T19:52:46.786288Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 172.26.63.14:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-02-29T19:52:46.786333Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 172.26.63.14:2379: use of closed network connection"}
	{"level":"info","ts":"2024-02-29T19:52:46.786385Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b14c3400d736e764","current-leader-member-id":"b14c3400d736e764"}
	{"level":"info","ts":"2024-02-29T19:52:46.789177Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"172.26.63.14:2380"}
	{"level":"info","ts":"2024-02-29T19:52:46.789276Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"172.26.63.14:2380"}
	{"level":"info","ts":"2024-02-29T19:52:46.789286Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"kubernetes-upgrade-800700","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://172.26.63.14:2380"],"advertise-client-urls":["https://172.26.63.14:2379"]}
	
	
	==> etcd [2dc2210127d7] <==
	{"level":"info","ts":"2024-02-29T19:53:03.256611Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-29T19:53:03.25662Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-29T19:53:03.263441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b14c3400d736e764 switched to configuration voters=(12775643421158598500)"}
	{"level":"info","ts":"2024-02-29T19:53:03.263515Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a5a2cf80f590a6a5","local-member-id":"b14c3400d736e764","added-peer-id":"b14c3400d736e764","added-peer-peer-urls":["https://172.26.63.14:2380"]}
	{"level":"info","ts":"2024-02-29T19:53:03.263592Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a5a2cf80f590a6a5","local-member-id":"b14c3400d736e764","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T19:53:03.263617Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-29T19:53:03.324718Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-02-29T19:53:03.326062Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b14c3400d736e764","initial-advertise-peer-urls":["https://172.26.63.14:2380"],"listen-peer-urls":["https://172.26.63.14:2380"],"advertise-client-urls":["https://172.26.63.14:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.26.63.14:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-02-29T19:53:03.326104Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-02-29T19:53:03.326178Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.26.63.14:2380"}
	{"level":"info","ts":"2024-02-29T19:53:03.326187Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.26.63.14:2380"}
	{"level":"info","ts":"2024-02-29T19:53:04.722894Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b14c3400d736e764 is starting a new election at term 2"}
	{"level":"info","ts":"2024-02-29T19:53:04.723021Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b14c3400d736e764 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-02-29T19:53:04.723069Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b14c3400d736e764 received MsgPreVoteResp from b14c3400d736e764 at term 2"}
	{"level":"info","ts":"2024-02-29T19:53:04.723162Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b14c3400d736e764 became candidate at term 3"}
	{"level":"info","ts":"2024-02-29T19:53:04.723187Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b14c3400d736e764 received MsgVoteResp from b14c3400d736e764 at term 3"}
	{"level":"info","ts":"2024-02-29T19:53:04.723209Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b14c3400d736e764 became leader at term 3"}
	{"level":"info","ts":"2024-02-29T19:53:04.723278Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b14c3400d736e764 elected leader b14c3400d736e764 at term 3"}
	{"level":"info","ts":"2024-02-29T19:53:04.728071Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b14c3400d736e764","local-member-attributes":"{Name:kubernetes-upgrade-800700 ClientURLs:[https://172.26.63.14:2379]}","request-path":"/0/members/b14c3400d736e764/attributes","cluster-id":"a5a2cf80f590a6a5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-29T19:53:04.728117Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T19:53:04.728144Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-29T19:53:04.740275Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.26.63.14:2379"}
	{"level":"info","ts":"2024-02-29T19:53:04.742478Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-29T19:53:04.74055Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-29T19:53:04.770622Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:53:41 up 7 min,  0 users,  load average: 1.03, 0.87, 0.43
	Linux kubernetes-upgrade-800700 5.10.207 #1 SMP Fri Feb 23 02:44:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [36afed20177d] <==
	W0229 19:52:55.905230       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:52:55.917958       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:52:56.028425       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:52:56.034108       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:52:56.053244       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:52:56.061987       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:52:56.081983       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:52:56.105182       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:52:56.139408       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:52:56.143494       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:52:56.156237       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:52:56.224431       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:52:56.243082       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:52:56.334362       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:52:56.344142       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:52:56.372145       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:52:56.376675       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:52:56.384887       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:52:56.393779       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:52:56.423627       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:52:56.426435       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:52:56.432474       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:52:56.542292       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:52:56.562115       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0229 19:52:56.661402       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [6efd56ff5c87] <==
	I0229 19:53:06.527875       1 controller.go:85] Starting OpenAPI V3 controller
	I0229 19:53:06.528089       1 naming_controller.go:291] Starting NamingConditionController
	I0229 19:53:06.528279       1 establishing_controller.go:76] Starting EstablishingController
	I0229 19:53:06.528418       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0229 19:53:06.528585       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0229 19:53:06.528744       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0229 19:53:06.630042       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0229 19:53:06.630082       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0229 19:53:06.713057       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0229 19:53:06.715469       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0229 19:53:06.715682       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0229 19:53:06.715854       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0229 19:53:06.716566       1 shared_informer.go:318] Caches are synced for configmaps
	I0229 19:53:06.720624       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0229 19:53:06.728600       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0229 19:53:06.730947       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0229 19:53:06.731859       1 aggregator.go:165] initial CRD sync complete...
	I0229 19:53:06.732018       1 autoregister_controller.go:141] Starting autoregister controller
	I0229 19:53:06.732107       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0229 19:53:06.732202       1 cache.go:39] Caches are synced for autoregister controller
	I0229 19:53:06.767623       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0229 19:53:07.521965       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0229 19:53:07.863755       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.26.63.14]
	I0229 19:53:07.865588       1 controller.go:624] quota admission added evaluator for: endpoints
	I0229 19:53:07.872074       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [2d8e8ea79b57] <==
	I0229 19:53:19.409935       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0229 19:53:19.410113       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0229 19:53:19.410331       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0229 19:53:19.417251       1 shared_informer.go:318] Caches are synced for daemon sets
	I0229 19:53:19.417755       1 shared_informer.go:318] Caches are synced for cronjob
	I0229 19:53:19.418392       1 shared_informer.go:318] Caches are synced for GC
	I0229 19:53:19.429006       1 shared_informer.go:318] Caches are synced for expand
	I0229 19:53:19.429362       1 shared_informer.go:318] Caches are synced for PVC protection
	I0229 19:53:19.429772       1 shared_informer.go:318] Caches are synced for TTL
	I0229 19:53:19.430032       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0229 19:53:19.430483       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="298.918µs"
	I0229 19:53:19.433627       1 shared_informer.go:318] Caches are synced for stateful set
	I0229 19:53:19.438431       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0229 19:53:19.440972       1 shared_informer.go:318] Caches are synced for crt configmap
	I0229 19:53:19.442705       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0229 19:53:19.444156       1 shared_informer.go:318] Caches are synced for job
	I0229 19:53:19.461131       1 shared_informer.go:318] Caches are synced for resource quota
	I0229 19:53:19.462016       1 shared_informer.go:318] Caches are synced for disruption
	I0229 19:53:19.466063       1 shared_informer.go:318] Caches are synced for deployment
	I0229 19:53:19.493285       1 shared_informer.go:318] Caches are synced for HPA
	I0229 19:53:19.499367       1 shared_informer.go:318] Caches are synced for resource quota
	I0229 19:53:19.518468       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0229 19:53:19.899424       1 shared_informer.go:318] Caches are synced for garbage collector
	I0229 19:53:19.899519       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0229 19:53:19.983110       1 shared_informer.go:318] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [f8fd99b59c28] <==
	I0229 19:48:03.835939       1 range_allocator.go:380] "Set node PodCIDR" node="kubernetes-upgrade-800700" podCIDRs=["10.244.0.0/24"]
	I0229 19:48:03.844906       1 shared_informer.go:318] Caches are synced for persistent volume
	I0229 19:48:03.853080       1 shared_informer.go:318] Caches are synced for HPA
	I0229 19:48:03.868404       1 shared_informer.go:318] Caches are synced for taint-eviction-controller
	I0229 19:48:03.868901       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0229 19:48:03.923962       1 shared_informer.go:318] Caches are synced for crt configmap
	I0229 19:48:03.929411       1 shared_informer.go:318] Caches are synced for attach detach
	I0229 19:48:03.970782       1 shared_informer.go:318] Caches are synced for disruption
	I0229 19:48:03.974426       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0229 19:48:03.979841       1 shared_informer.go:318] Caches are synced for resource quota
	I0229 19:48:03.979843       1 shared_informer.go:318] Caches are synced for resource quota
	I0229 19:48:03.991307       1 shared_informer.go:318] Caches are synced for stateful set
	I0229 19:48:04.378112       1 shared_informer.go:318] Caches are synced for garbage collector
	I0229 19:48:04.417948       1 shared_informer.go:318] Caches are synced for garbage collector
	I0229 19:48:04.417990       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0229 19:48:04.436523       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-cnkv4"
	I0229 19:48:04.486012       1 event.go:376] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-76f75df574 to 1"
	I0229 19:48:04.731531       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-76f75df574-7swm2"
	I0229 19:48:04.755079       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="269.333386ms"
	I0229 19:48:04.779891       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="23.025417ms"
	I0229 19:48:04.780369       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="85.506µs"
	I0229 19:48:04.783787       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="75.105µs"
	I0229 19:48:06.489396       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="225.008µs"
	I0229 19:48:45.077041       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="14.216929ms"
	I0229 19:48:45.077150       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="51.102µs"
	
	
	==> kube-proxy [b56672c0067a] <==
	I0229 19:53:05.103660       1 server_others.go:72] "Using iptables proxy"
	I0229 19:53:06.718102       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.26.63.14"]
	I0229 19:53:06.936160       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0229 19:53:06.936721       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0229 19:53:06.937894       1 server_others.go:168] "Using iptables Proxier"
	I0229 19:53:06.944196       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 19:53:06.944508       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0229 19:53:06.944902       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 19:53:06.946382       1 config.go:188] "Starting service config controller"
	I0229 19:53:06.946525       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 19:53:06.946601       1 config.go:97] "Starting endpoint slice config controller"
	I0229 19:53:06.946809       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 19:53:06.947536       1 config.go:315] "Starting node config controller"
	I0229 19:53:06.947715       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 19:53:07.047284       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0229 19:53:07.047502       1 shared_informer.go:318] Caches are synced for service config
	I0229 19:53:07.047948       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [f6c2b70aae89] <==
	I0229 19:48:05.888446       1 server_others.go:72] "Using iptables proxy"
	I0229 19:48:05.900748       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.26.63.14"]
	I0229 19:48:05.958444       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0229 19:48:05.958592       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0229 19:48:05.958610       1 server_others.go:168] "Using iptables Proxier"
	I0229 19:48:05.970703       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0229 19:48:05.971100       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0229 19:48:05.971132       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 19:48:05.979587       1 config.go:188] "Starting service config controller"
	I0229 19:48:05.988654       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0229 19:48:05.988704       1 config.go:97] "Starting endpoint slice config controller"
	I0229 19:48:05.988712       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0229 19:48:05.989347       1 config.go:315] "Starting node config controller"
	I0229 19:48:05.989466       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0229 19:48:06.089703       1 shared_informer.go:318] Caches are synced for node config
	I0229 19:48:06.089748       1 shared_informer.go:318] Caches are synced for service config
	I0229 19:48:06.089782       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [3919587f983d] <==
	W0229 19:47:49.685140       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0229 19:47:49.685435       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0229 19:47:49.732725       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0229 19:47:49.733181       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0229 19:47:49.820640       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0229 19:47:49.820690       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0229 19:47:49.849174       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0229 19:47:49.849422       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0229 19:47:49.917091       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0229 19:47:49.917347       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0229 19:47:50.000800       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0229 19:47:50.001389       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0229 19:47:50.010091       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0229 19:47:50.010195       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0229 19:47:50.159036       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0229 19:47:50.159110       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0229 19:47:50.162591       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0229 19:47:50.162909       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0229 19:47:50.185223       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0229 19:47:50.185332       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0229 19:47:52.434766       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0229 19:52:46.650081       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0229 19:52:46.654526       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0229 19:52:46.654787       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0229 19:52:46.657563       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [748d5a892900] <==
	I0229 19:53:04.188277       1 serving.go:380] Generated self-signed cert in-memory
	W0229 19:53:06.639234       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0229 19:53:06.639760       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0229 19:53:06.640061       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0229 19:53:06.640297       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0229 19:53:06.719205       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0229 19:53:06.719626       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0229 19:53:06.723101       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0229 19:53:06.723291       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0229 19:53:06.724309       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0229 19:53:06.724542       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0229 19:53:06.824013       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 29 19:53:02 kubernetes-upgrade-800700 kubelet[1782]: I0229 19:53:02.012307    1782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fb7953e26f41c3aa34b6119ce73e6d0649f23d64d5c17d1a8a9dd6bd8f820e4"
	Feb 29 19:53:02 kubernetes-upgrade-800700 kubelet[1782]: I0229 19:53:02.012405    1782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcc182b61a7340bb035e51b3d82fa56d5644e36b629ba91a3feae93a915de2d1"
	Feb 29 19:53:02 kubernetes-upgrade-800700 kubelet[1782]: I0229 19:53:02.012501    1782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f90d033552b05e949422690eb4678b18c1c49466ddebbf4a61f020b9a14ea7cb"
	Feb 29 19:53:02 kubernetes-upgrade-800700 kubelet[1782]: I0229 19:53:02.019419    1782 status_manager.go:853] "Failed to get status for pod" podUID="e753565cd643197ece8fd719ad83a7df" pod="kube-system/kube-scheduler-kubernetes-upgrade-800700" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-kubernetes-upgrade-800700\": dial tcp 172.26.63.14:8443: connect: connection refused"
	Feb 29 19:53:02 kubernetes-upgrade-800700 kubelet[1782]: I0229 19:53:02.023207    1782 status_manager.go:853] "Failed to get status for pod" podUID="619c292f-b331-46e2-9087-d76e87ed8a3f" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 172.26.63.14:8443: connect: connection refused"
	Feb 29 19:53:02 kubernetes-upgrade-800700 kubelet[1782]: I0229 19:53:02.024967    1782 status_manager.go:853] "Failed to get status for pod" podUID="13ead83c-ddde-43e5-a07a-ec53a0487e94" pod="kube-system/kube-proxy-cnkv4" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cnkv4\": dial tcp 172.26.63.14:8443: connect: connection refused"
	Feb 29 19:53:02 kubernetes-upgrade-800700 kubelet[1782]: I0229 19:53:02.032653    1782 status_manager.go:853] "Failed to get status for pod" podUID="6a1a06d4-9c73-4647-8cf7-5c11eefc782b" pod="kube-system/coredns-76f75df574-7swm2" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-7swm2\": dial tcp 172.26.63.14:8443: connect: connection refused"
	Feb 29 19:53:02 kubernetes-upgrade-800700 kubelet[1782]: I0229 19:53:02.035251    1782 status_manager.go:853] "Failed to get status for pod" podUID="754a0a4343b6ff92af3e1b7e07531203" pod="kube-system/etcd-kubernetes-upgrade-800700" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-kubernetes-upgrade-800700\": dial tcp 172.26.63.14:8443: connect: connection refused"
	Feb 29 19:53:02 kubernetes-upgrade-800700 kubelet[1782]: I0229 19:53:02.036665    1782 status_manager.go:853] "Failed to get status for pod" podUID="bafa0ff76f46e9bf524a9ac038c61ac8" pod="kube-system/kube-apiserver-kubernetes-upgrade-800700" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-kubernetes-upgrade-800700\": dial tcp 172.26.63.14:8443: connect: connection refused"
	Feb 29 19:53:02 kubernetes-upgrade-800700 kubelet[1782]: I0229 19:53:02.037408    1782 status_manager.go:853] "Failed to get status for pod" podUID="e753565cd643197ece8fd719ad83a7df" pod="kube-system/kube-scheduler-kubernetes-upgrade-800700" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-kubernetes-upgrade-800700\": dial tcp 172.26.63.14:8443: connect: connection refused"
	Feb 29 19:53:02 kubernetes-upgrade-800700 kubelet[1782]: I0229 19:53:02.038423    1782 status_manager.go:853] "Failed to get status for pod" podUID="619c292f-b331-46e2-9087-d76e87ed8a3f" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 172.26.63.14:8443: connect: connection refused"
	Feb 29 19:53:02 kubernetes-upgrade-800700 kubelet[1782]: I0229 19:53:02.039533    1782 status_manager.go:853] "Failed to get status for pod" podUID="13ead83c-ddde-43e5-a07a-ec53a0487e94" pod="kube-system/kube-proxy-cnkv4" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cnkv4\": dial tcp 172.26.63.14:8443: connect: connection refused"
	Feb 29 19:53:02 kubernetes-upgrade-800700 kubelet[1782]: I0229 19:53:02.040985    1782 status_manager.go:853] "Failed to get status for pod" podUID="6a1a06d4-9c73-4647-8cf7-5c11eefc782b" pod="kube-system/coredns-76f75df574-7swm2" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-7swm2\": dial tcp 172.26.63.14:8443: connect: connection refused"
	Feb 29 19:53:02 kubernetes-upgrade-800700 kubelet[1782]: I0229 19:53:02.042808    1782 status_manager.go:853] "Failed to get status for pod" podUID="754a0a4343b6ff92af3e1b7e07531203" pod="kube-system/etcd-kubernetes-upgrade-800700" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-kubernetes-upgrade-800700\": dial tcp 172.26.63.14:8443: connect: connection refused"
	Feb 29 19:53:02 kubernetes-upgrade-800700 kubelet[1782]: I0229 19:53:02.045534    1782 status_manager.go:853] "Failed to get status for pod" podUID="bafa0ff76f46e9bf524a9ac038c61ac8" pod="kube-system/kube-apiserver-kubernetes-upgrade-800700" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-kubernetes-upgrade-800700\": dial tcp 172.26.63.14:8443: connect: connection refused"
	Feb 29 19:53:02 kubernetes-upgrade-800700 kubelet[1782]: I0229 19:53:02.063308    1782 status_manager.go:853] "Failed to get status for pod" podUID="99668c096a7328d2a7d22f2903fdfa33" pod="kube-system/kube-controller-manager-kubernetes-upgrade-800700" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-kubernetes-upgrade-800700\": dial tcp 172.26.63.14:8443: connect: connection refused"
	Feb 29 19:53:02 kubernetes-upgrade-800700 kubelet[1782]: I0229 19:53:02.843705    1782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9675365c0609c8fa9220d40676161fac429eaafc49bf59638689ca75f8e2d0a3"
	Feb 29 19:53:03 kubernetes-upgrade-800700 kubelet[1782]: I0229 19:53:03.498425    1782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71b5b61817dfede72a16328b4f258c84c97e3fedd15c141a75e624bf44ca4c45"
	Feb 29 19:53:03 kubernetes-upgrade-800700 kubelet[1782]: I0229 19:53:03.524445    1782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7932a798d86da6ada6e2e268ebc2a57b148ebdd73cd39e76c4e75237405a7837"
	Feb 29 19:53:03 kubernetes-upgrade-800700 kubelet[1782]: I0229 19:53:03.822173    1782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac9f93c84106b9e6f6d218f50bf071af234daf6f0e682ffcee752ec973d3c81f"
	Feb 29 19:53:04 kubernetes-upgrade-800700 kubelet[1782]: I0229 19:53:04.531784    1782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bba973fcb7ac8a4e83a1fdb7bbd622d1c07752bf16a83eb112bd6b2c7cbd1ab"
	Feb 29 19:53:04 kubernetes-upgrade-800700 kubelet[1782]: I0229 19:53:04.992756    1782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32f5edbc40cd3772e7b964661e00a0c7dce465412bd41b482c7ba0b60d6d9093"
	Feb 29 19:53:06 kubernetes-upgrade-800700 kubelet[1782]: E0229 19:53:06.631176    1782 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Feb 29 19:53:06 kubernetes-upgrade-800700 kubelet[1782]: E0229 19:53:06.631809    1782 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Feb 29 19:53:06 kubernetes-upgrade-800700 kubelet[1782]: E0229 19:53:06.633061    1782 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	
	
	==> storage-provisioner [9ed70e11e673] <==
	I0229 19:53:04.970536       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0229 19:53:06.765905       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0229 19:53:06.771017       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0229 19:53:24.206097       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0229 19:53:24.206937       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-800700_f19e0e88-6a09-4656-8b24-2d0413274c42!
	I0229 19:53:24.206558       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f9602a11-f9db-4270-bac9-9bb8441c3e8e", APIVersion:"v1", ResourceVersion:"693", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-800700_f19e0e88-6a09-4656-8b24-2d0413274c42 became leader
	I0229 19:53:24.308743       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-800700_f19e0e88-6a09-4656-8b24-2d0413274c42!
	
	
	==> storage-provisioner [a16e3f52e157] <==
	I0229 19:48:06.367338       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0229 19:48:06.382393       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0229 19:48:06.382882       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0229 19:48:06.396833       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0229 19:48:06.397069       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-800700_12909cba-d129-4800-8e5f-439e94fc6da3!
	I0229 19:48:06.397110       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f9602a11-f9db-4270-bac9-9bb8441c3e8e", APIVersion:"v1", ResourceVersion:"362", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-800700_12909cba-d129-4800-8e5f-439e94fc6da3 became leader
	I0229 19:48:06.497914       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-800700_12909cba-d129-4800-8e5f-439e94fc6da3!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 19:53:33.028686    5372 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
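The `37a8eec1…\meta.json` path in the stderr warning above recurs in every failed command in this report. Assuming Docker's context store layout (each context's metadata directory is named after the SHA-256 digest of the context name), the directory name can be reproduced with a short check:

```python
# The "Unable to resolve the current Docker CLI context" warnings all point at
# the same directory name under .docker\contexts\meta. Docker's context store
# keys each context's metadata directory by the SHA-256 digest of the context
# name, so the hash in the path should be sha256("default").
import hashlib

def docker_context_dir(name: str) -> str:
    """Return the meta-directory name Docker derives from a context name."""
    return hashlib.sha256(name.encode("utf-8")).hexdigest()

print(docker_context_dir("default"))
```

The warning appears benign in these runs: the `default` context simply has no `meta.json` on disk, which the Docker CLI reports but the commands proceed past.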
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p kubernetes-upgrade-800700 -n kubernetes-upgrade-800700
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p kubernetes-upgrade-800700 -n kubernetes-upgrade-800700: (11.9829028s)
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-800700 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-800700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-800700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-800700: (47.5326013s)
--- FAIL: TestKubernetesUpgrade (1402.15s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (312.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-737000 --driver=hyperv
E0229 19:55:06.616137    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
E0229 19:55:23.394291    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
E0229 19:55:32.068894    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-737000 --driver=hyperv: exit status 1 (4m59.8325927s)

                                                
                                                
-- stdout --
	* [NoKubernetes-737000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting control plane node NoKubernetes-737000 in cluster NoKubernetes-737000
	* Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 19:54:42.047019    6740 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-737000 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-737000 -n NoKubernetes-737000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-737000 -n NoKubernetes-737000: exit status 6 (12.2736134s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 19:59:41.931314   11032 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0229 19:59:54.021477   11032 status.go:410] forwarded endpoint: failed to lookup ip for ""

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "NoKubernetes-737000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (312.11s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (371.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-863900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperv
net_test.go:112: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kindnet-863900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperv: exit status 90 (6m11.7038915s)

                                                
                                                
-- stdout --
	* [kindnet-863900] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting control plane node kindnet-863900 in cluster kindnet-863900
	* Creating hyperv VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 19:59:46.661754   14320 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0229 19:59:46.708769   14320 out.go:291] Setting OutFile to fd 900 ...
	I0229 19:59:46.709769   14320 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 19:59:46.709769   14320 out.go:304] Setting ErrFile to fd 1040...
	I0229 19:59:46.709769   14320 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 19:59:46.730762   14320 out.go:298] Setting JSON to false
	I0229 19:59:46.733761   14320 start.go:129] hostinfo: {"hostname":"minikube5","uptime":58523,"bootTime":1709178263,"procs":199,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0229 19:59:46.734763   14320 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 19:59:46.735761   14320 out.go:177] * [kindnet-863900] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 19:59:46.735761   14320 notify.go:220] Checking for updates...
	I0229 19:59:46.736764   14320 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 19:59:46.737761   14320 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 19:59:46.737761   14320 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0229 19:59:46.738762   14320 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 19:59:46.738762   14320 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 19:59:46.740768   14320 config.go:182] Loaded profile config "NoKubernetes-737000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 19:59:46.740768   14320 config.go:182] Loaded profile config "auto-863900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 19:59:46.740768   14320 config.go:182] Loaded profile config "pause-027300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 19:59:46.741771   14320 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 19:59:52.090304   14320 out.go:177] * Using the hyperv driver based on user configuration
	I0229 19:59:52.091001   14320 start.go:299] selected driver: hyperv
	I0229 19:59:52.091001   14320 start.go:903] validating driver "hyperv" against <nil>
	I0229 19:59:52.091001   14320 start.go:914] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 19:59:52.133930   14320 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 19:59:52.135199   14320 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 19:59:52.135199   14320 cni.go:84] Creating CNI manager for "kindnet"
	I0229 19:59:52.135199   14320 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I0229 19:59:52.135199   14320 start_flags.go:323] config:
	{Name:kindnet-863900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-863900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 19:59:52.135940   14320 iso.go:125] acquiring lock: {Name:mk91f2ee29fbed5605669750e8cfa308a1229357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 19:59:52.136943   14320 out.go:177] * Starting control plane node kindnet-863900 in cluster kindnet-863900
	I0229 19:59:52.137942   14320 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 19:59:52.137942   14320 preload.go:148] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0229 19:59:52.137942   14320 cache.go:56] Caching tarball of preloaded images
	I0229 19:59:52.137942   14320 preload.go:174] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 19:59:52.137942   14320 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0229 19:59:52.138947   14320 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-863900\config.json ...
	I0229 19:59:52.138947   14320 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-863900\config.json: {Name:mkda732c5f352f2e525032e7075f3af87ecaa556 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 19:59:52.139940   14320 start.go:365] acquiring machines lock for kindnet-863900: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 20:02:40.641452   14320 start.go:369] acquired machines lock for "kindnet-863900" in 2m48.4920974s
	I0229 20:02:40.641775   14320 start.go:93] Provisioning new machine with config: &{Name:kindnet-863900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-863900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 20:02:40.641977   14320 start.go:125] createHost starting for "" (driver="hyperv")
	I0229 20:02:40.672491   14320 out.go:204] * Creating hyperv VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0229 20:02:40.672885   14320 start.go:159] libmachine.API.Create for "kindnet-863900" (driver="hyperv")
	I0229 20:02:40.673127   14320 client.go:168] LocalClient.Create starting
	I0229 20:02:40.673459   14320 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0229 20:02:40.673459   14320 main.go:141] libmachine: Decoding PEM data...
	I0229 20:02:40.673459   14320 main.go:141] libmachine: Parsing certificate...
	I0229 20:02:40.673459   14320 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0229 20:02:40.674126   14320 main.go:141] libmachine: Decoding PEM data...
	I0229 20:02:40.674126   14320 main.go:141] libmachine: Parsing certificate...
	I0229 20:02:40.674252   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0229 20:02:42.497434   14320 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0229 20:02:42.497434   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:02:42.497434   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0229 20:02:44.245334   14320 main.go:141] libmachine: [stdout =====>] : False
	
	I0229 20:02:44.245334   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:02:44.245521   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0229 20:02:45.698953   14320 main.go:141] libmachine: [stdout =====>] : True
	
	I0229 20:02:45.698953   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:02:45.699168   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0229 20:02:49.502107   14320 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0229 20:02:49.503018   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:02:49.504992   14320 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 20:02:49.839044   14320 main.go:141] libmachine: Creating SSH key...
	I0229 20:02:50.019645   14320 main.go:141] libmachine: Creating VM...
	I0229 20:02:50.019645   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0229 20:02:52.924308   14320 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0229 20:02:52.924308   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:02:52.924397   14320 main.go:141] libmachine: Using switch "Default Switch"
	I0229 20:02:52.924550   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0229 20:02:54.627969   14320 main.go:141] libmachine: [stdout =====>] : True
	
	I0229 20:02:54.628334   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:02:54.628334   14320 main.go:141] libmachine: Creating VHD
	I0229 20:02:54.628427   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kindnet-863900\fixed.vhd' -SizeBytes 10MB -Fixed
	I0229 20:02:58.616159   14320 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kindnet-863900\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 287EF437-4032-435E-B02E-6AB69386F53E
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0229 20:02:58.616159   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:02:58.616238   14320 main.go:141] libmachine: Writing magic tar header
	I0229 20:02:58.616238   14320 main.go:141] libmachine: Writing SSH key tar header
	I0229 20:02:58.625413   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kindnet-863900\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kindnet-863900\disk.vhd' -VHDType Dynamic -DeleteSource
	I0229 20:03:01.922593   14320 main.go:141] libmachine: [stdout =====>] : 
	I0229 20:03:01.922593   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:03:01.922593   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kindnet-863900\disk.vhd' -SizeBytes 20000MB
	I0229 20:03:04.462395   14320 main.go:141] libmachine: [stdout =====>] : 
	I0229 20:03:04.462395   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:03:04.462587   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM kindnet-863900 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kindnet-863900' -SwitchName 'Default Switch' -MemoryStartupBytes 3072MB
	I0229 20:03:09.414106   14320 main.go:141] libmachine: [stdout =====>] : 
	Name           State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----           ----- ----------- ----------------- ------   ------             -------
	kindnet-863900 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0229 20:03:09.414106   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:03:09.414432   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName kindnet-863900 -DynamicMemoryEnabled $false
	I0229 20:03:11.556851   14320 main.go:141] libmachine: [stdout =====>] : 
	I0229 20:03:11.556851   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:03:11.557548   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor kindnet-863900 -Count 2
	I0229 20:03:13.604454   14320 main.go:141] libmachine: [stdout =====>] : 
	I0229 20:03:13.604454   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:03:13.604702   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName kindnet-863900 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kindnet-863900\boot2docker.iso'
	I0229 20:03:15.999835   14320 main.go:141] libmachine: [stdout =====>] : 
	I0229 20:03:15.999835   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:03:15.999835   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName kindnet-863900 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kindnet-863900\disk.vhd'
	I0229 20:03:18.456915   14320 main.go:141] libmachine: [stdout =====>] : 
	I0229 20:03:18.456915   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:03:18.456915   14320 main.go:141] libmachine: Starting VM...
	I0229 20:03:18.457919   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM kindnet-863900
	I0229 20:03:21.148275   14320 main.go:141] libmachine: [stdout =====>] : 
	I0229 20:03:21.148275   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:03:21.148275   14320 main.go:141] libmachine: Waiting for host to start...
	I0229 20:03:21.148479   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-863900 ).state
	I0229 20:03:23.260920   14320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:03:23.260920   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:03:23.260920   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:03:25.625796   14320 main.go:141] libmachine: [stdout =====>] : 
	I0229 20:03:25.625796   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:03:26.630528   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-863900 ).state
	I0229 20:03:28.664499   14320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:03:28.664499   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:03:28.664722   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:03:31.009362   14320 main.go:141] libmachine: [stdout =====>] : 
	I0229 20:03:31.009362   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:03:32.014938   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-863900 ).state
	I0229 20:03:34.034262   14320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:03:34.034262   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:03:34.034262   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:03:36.396088   14320 main.go:141] libmachine: [stdout =====>] : 
	I0229 20:03:36.396369   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:03:37.397011   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-863900 ).state
	I0229 20:03:39.744438   14320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:03:39.744438   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:03:39.744519   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:03:42.301462   14320 main.go:141] libmachine: [stdout =====>] : 
	I0229 20:03:42.301462   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:03:43.316419   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-863900 ).state
	I0229 20:03:45.527914   14320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:03:45.527914   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:03:45.528017   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:03:47.997699   14320 main.go:141] libmachine: [stdout =====>] : 172.26.48.88
	
	I0229 20:03:47.997699   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:03:47.997699   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-863900 ).state
	I0229 20:03:49.993395   14320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:03:49.993873   14320 main.go:141] libmachine: [stderr =====>] : 
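The repeated `Get-VM ... .state` / `ipaddresses[0]` pairs above are a polling loop: the driver re-runs the IP query until the guest obtains a DHCP lease and the command finally prints an address (here after roughly 25 seconds). A minimal sketch of that wait loop, with the PowerShell query abstracted into an injected function — `waitForIP` and its signature are illustrative assumptions, not the driver's real API:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP polls an address query until it returns a non-empty result,
// mirroring the "Waiting for host to start..." loop in the log. The query
// stands in for the PowerShell ipaddresses[0] command.
func waitForIP(query func() string, attempts int, delay time.Duration) (string, error) {
	for i := 0; i < attempts; i++ {
		if ip := query(); ip != "" {
			return ip, nil
		}
		time.Sleep(delay)
	}
	return "", errors.New("VM never reported an IP address")
}

func main() {
	// Simulate a VM whose DHCP lease arrives on the third poll.
	calls := 0
	query := func() string {
		calls++
		if calls < 3 {
			return ""
		}
		return "172.26.48.88"
	}
	ip, err := waitForIP(query, 10, time.Millisecond)
	fmt.Println(ip, err)
}
```

Each empty result costs two PowerShell round-trips plus the sleep, which is why a cold Hyper-V boot adds tens of seconds to `createHost`.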
	I0229 20:03:49.993873   14320 machine.go:88] provisioning docker machine ...
	I0229 20:03:49.993962   14320 buildroot.go:166] provisioning hostname "kindnet-863900"
	I0229 20:03:49.994041   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-863900 ).state
	I0229 20:03:52.001302   14320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:03:52.001302   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:03:52.001302   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:03:54.379563   14320 main.go:141] libmachine: [stdout =====>] : 172.26.48.88
	
	I0229 20:03:54.379563   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:03:54.384766   14320 main.go:141] libmachine: Using SSH client type: native
	I0229 20:03:54.385435   14320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.48.88 22 <nil> <nil>}
	I0229 20:03:54.385435   14320 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-863900 && echo "kindnet-863900" | sudo tee /etc/hostname
	I0229 20:03:54.544225   14320 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-863900
	
	I0229 20:03:54.544225   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-863900 ).state
	I0229 20:03:56.534469   14320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:03:56.534469   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:03:56.534469   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:03:58.901725   14320 main.go:141] libmachine: [stdout =====>] : 172.26.48.88
	
	I0229 20:03:58.901725   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:03:58.909549   14320 main.go:141] libmachine: Using SSH client type: native
	I0229 20:03:58.910215   14320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.48.88 22 <nil> <nil>}
	I0229 20:03:58.910215   14320 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-863900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-863900/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-863900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 20:03:59.065193   14320 main.go:141] libmachine: SSH cmd err, output: <nil>: 
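The shell snippet the provisioner just ran implements a small idempotent edit: if `/etc/hosts` does not already map the hostname, rewrite the existing `127.0.1.1` line, or append one if none exists. The same logic as a pure-string sketch — `ensureHostname` is a hypothetical helper written for illustration:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostname applies the /etc/hosts edit from the log: skip if the name
// already resolves, otherwise rewrite the 127.0.1.1 entry or append one.
func ensureHostname(hosts, name string) string {
	// Already mapped (whitespace followed by the name at end of a line)?
	if regexp.MustCompile(`(?m)\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
		return hosts
	}
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if re.MatchString(hosts) {
		return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	in := "127.0.0.1 localhost\n127.0.1.1 oldname\n"
	fmt.Print(ensureHostname(in, "kindnet-863900"))
}
```

Running the edit twice is a no-op, which matters because provisioning can be retried after SSH hiccups.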
	I0229 20:03:59.065193   14320 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0229 20:03:59.065193   14320 buildroot.go:174] setting up certificates
	I0229 20:03:59.065775   14320 provision.go:83] configureAuth start
	I0229 20:03:59.065864   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-863900 ).state
	I0229 20:04:01.053409   14320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:04:01.054299   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:04:01.054360   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:04:03.435185   14320 main.go:141] libmachine: [stdout =====>] : 172.26.48.88
	
	I0229 20:04:03.435185   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:04:03.435185   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-863900 ).state
	I0229 20:04:05.416831   14320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:04:05.417328   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:04:05.417388   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:04:07.788742   14320 main.go:141] libmachine: [stdout =====>] : 172.26.48.88
	
	I0229 20:04:07.788947   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:04:07.788947   14320 provision.go:138] copyHostCerts
	I0229 20:04:07.789263   14320 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0229 20:04:07.789336   14320 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0229 20:04:07.789652   14320 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0229 20:04:07.790565   14320 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0229 20:04:07.790565   14320 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0229 20:04:07.790930   14320 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0229 20:04:07.791767   14320 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0229 20:04:07.791767   14320 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0229 20:04:07.791767   14320 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0229 20:04:07.792739   14320 provision.go:112] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kindnet-863900 san=[172.26.48.88 172.26.48.88 localhost 127.0.0.1 minikube kindnet-863900]
	I0229 20:04:08.166473   14320 provision.go:172] copyRemoteCerts
	I0229 20:04:08.175118   14320 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 20:04:08.175118   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-863900 ).state
	I0229 20:04:10.149576   14320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:04:10.149693   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:04:10.149693   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:04:12.525582   14320 main.go:141] libmachine: [stdout =====>] : 172.26.48.88
	
	I0229 20:04:12.525582   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:04:12.525898   14320 sshutil.go:53] new ssh client: &{IP:172.26.48.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kindnet-863900\id_rsa Username:docker}
	I0229 20:04:12.635624   14320 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4602585s)
	I0229 20:04:12.635867   14320 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 20:04:12.686286   14320 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0229 20:04:12.749102   14320 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0229 20:04:12.796705   14320 provision.go:86] duration metric: configureAuth took 13.7301673s
	I0229 20:04:12.796773   14320 buildroot.go:189] setting minikube options for container-runtime
	I0229 20:04:12.797326   14320 config.go:182] Loaded profile config "kindnet-863900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 20:04:12.797436   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-863900 ).state
	I0229 20:04:14.781697   14320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:04:14.781697   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:04:14.781697   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:04:17.184681   14320 main.go:141] libmachine: [stdout =====>] : 172.26.48.88
	
	I0229 20:04:17.184681   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:04:17.188607   14320 main.go:141] libmachine: Using SSH client type: native
	I0229 20:04:17.188734   14320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.48.88 22 <nil> <nil>}
	I0229 20:04:17.188734   14320 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 20:04:17.321143   14320 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 20:04:17.321143   14320 buildroot.go:70] root file system type: tmpfs
	I0229 20:04:17.321143   14320 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 20:04:17.321143   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-863900 ).state
	I0229 20:04:19.346647   14320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:04:19.346842   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:04:19.346842   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:04:21.845627   14320 main.go:141] libmachine: [stdout =====>] : 172.26.48.88
	
	I0229 20:04:21.845724   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:04:21.851424   14320 main.go:141] libmachine: Using SSH client type: native
	I0229 20:04:21.852147   14320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.48.88 22 <nil> <nil>}
	I0229 20:04:21.852147   14320 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 20:04:22.012781   14320 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
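The unit file echoed back above is generated, not static: the `ExecStart` line is assembled from the TLS material pushed earlier by `copyRemoteCerts`, the driver label, and the cluster's service CIDR (`10.96.0.0/12`) as an insecure registry. A sketch of how that flag list could be joined — `buildDockerdExecStart` is an illustrative helper, not minikube's actual function:

```go
package main

import (
	"fmt"
	"strings"
)

// buildDockerdExecStart assembles the dockerd command line seen in the
// generated docker.service unit: TCP and unix listeners, mandatory TLS
// client verification, and the service CIDR as an insecure registry.
func buildDockerdExecStart(insecureCIDR string) string {
	return strings.Join([]string{
		"/usr/bin/dockerd",
		"-H tcp://0.0.0.0:2376",
		"-H unix:///var/run/docker.sock",
		"--default-ulimit=nofile=1048576:1048576",
		"--tlsverify",
		"--tlscacert /etc/docker/ca.pem",
		"--tlscert /etc/docker/server.pem",
		"--tlskey /etc/docker/server-key.pem",
		"--label provider=hyperv",
		"--insecure-registry " + insecureCIDR,
	}, " ")
}

func main() {
	fmt.Println(buildDockerdExecStart("10.96.0.0/12"))
}
```

The empty `ExecStart=` directly before it in the unit is essential: it clears any inherited ExecStart so systemd does not reject the service for having two start commands.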
	I0229 20:04:22.013344   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-863900 ).state
	I0229 20:04:24.062929   14320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:04:24.062929   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:04:24.062929   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:04:26.500275   14320 main.go:141] libmachine: [stdout =====>] : 172.26.48.88
	
	I0229 20:04:26.501007   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:04:26.506537   14320 main.go:141] libmachine: Using SSH client type: native
	I0229 20:04:26.507082   14320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.48.88 22 <nil> <nil>}
	I0229 20:04:26.507206   14320 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 20:04:27.583359   14320 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0229 20:04:27.583891   14320 machine.go:91] provisioned docker machine in 37.5879278s
	I0229 20:04:27.583937   14320 client.go:171] LocalClient.Create took 1m46.9048664s
	I0229 20:04:27.584059   14320 start.go:167] duration metric: libmachine.API.Create for "kindnet-863900" took 1m46.9052311s
	I0229 20:04:27.584129   14320 start.go:300] post-start starting for "kindnet-863900" (driver="hyperv")
	I0229 20:04:27.584155   14320 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 20:04:27.593522   14320 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 20:04:27.593522   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-863900 ).state
	I0229 20:04:29.646418   14320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:04:29.646418   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:04:29.646418   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:04:32.066676   14320 main.go:141] libmachine: [stdout =====>] : 172.26.48.88
	
	I0229 20:04:32.066676   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:04:32.067668   14320 sshutil.go:53] new ssh client: &{IP:172.26.48.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kindnet-863900\id_rsa Username:docker}
	I0229 20:04:32.168311   14320 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5745354s)
	I0229 20:04:32.177500   14320 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 20:04:32.185117   14320 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 20:04:32.185117   14320 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0229 20:04:32.185117   14320 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0229 20:04:32.186753   14320 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem -> 43562.pem in /etc/ssl/certs
	I0229 20:04:32.196225   14320 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 20:04:32.214976   14320 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem --> /etc/ssl/certs/43562.pem (1708 bytes)
	I0229 20:04:32.262737   14320 start.go:303] post-start completed in 4.6783226s
	I0229 20:04:32.265938   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-863900 ).state
	I0229 20:04:34.260310   14320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:04:34.260310   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:04:34.260310   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:04:36.693483   14320 main.go:141] libmachine: [stdout =====>] : 172.26.48.88
	
	I0229 20:04:36.693483   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:04:36.693483   14320 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\kindnet-863900\config.json ...
	I0229 20:04:36.695758   14320 start.go:128] duration metric: createHost completed in 1m56.0472841s
	I0229 20:04:36.695830   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-863900 ).state
	I0229 20:04:38.706572   14320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:04:38.706572   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:04:38.706656   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:04:41.091849   14320 main.go:141] libmachine: [stdout =====>] : 172.26.48.88
	
	I0229 20:04:41.091849   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:04:41.096547   14320 main.go:141] libmachine: Using SSH client type: native
	I0229 20:04:41.097144   14320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.48.88 22 <nil> <nil>}
	I0229 20:04:41.097144   14320 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 20:04:41.228638   14320 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709237081.396897006
	
	I0229 20:04:41.228638   14320 fix.go:206] guest clock: 1709237081.396897006
	I0229 20:04:41.228638   14320 fix.go:219] Guest: 2024-02-29 20:04:41.396897006 +0000 UTC Remote: 2024-02-29 20:04:36.6958304 +0000 UTC m=+290.111521801 (delta=4.701066606s)
	I0229 20:04:41.228638   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-863900 ).state
	I0229 20:04:43.211107   14320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:04:43.211107   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:04:43.211107   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:04:45.614028   14320 main.go:141] libmachine: [stdout =====>] : 172.26.48.88
	
	I0229 20:04:45.614028   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:04:45.618791   14320 main.go:141] libmachine: Using SSH client type: native
	I0229 20:04:45.619074   14320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.48.88 22 <nil> <nil>}
	I0229 20:04:45.619074   14320 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709237081
	I0229 20:04:45.760271   14320 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Feb 29 20:04:41 UTC 2024
	
	I0229 20:04:45.760334   14320 fix.go:226] clock set: Thu Feb 29 20:04:41 UTC 2024
	 (err=<nil>)
	I0229 20:04:45.760334   14320 start.go:83] releasing machines lock for "kindnet-863900", held for 2m5.1118167s
	I0229 20:04:45.760506   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-863900 ).state
	I0229 20:04:47.811803   14320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:04:47.811918   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:04:47.811918   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:04:50.286886   14320 main.go:141] libmachine: [stdout =====>] : 172.26.48.88
	
	I0229 20:04:50.286886   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:04:50.291286   14320 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 20:04:50.291506   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-863900 ).state
	I0229 20:04:50.299454   14320 ssh_runner.go:195] Run: cat /version.json
	I0229 20:04:50.300441   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kindnet-863900 ).state
	I0229 20:04:52.590218   14320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:04:52.590218   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:04:52.590218   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:04:52.592210   14320 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:04:52.592210   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:04:52.592210   14320 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kindnet-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:04:55.168927   14320 main.go:141] libmachine: [stdout =====>] : 172.26.48.88
	
	I0229 20:04:55.168927   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:04:55.168927   14320 sshutil.go:53] new ssh client: &{IP:172.26.48.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kindnet-863900\id_rsa Username:docker}
	I0229 20:04:55.190289   14320 main.go:141] libmachine: [stdout =====>] : 172.26.48.88
	
	I0229 20:04:55.190289   14320 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:04:55.190289   14320 sshutil.go:53] new ssh client: &{IP:172.26.48.88 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\kindnet-863900\id_rsa Username:docker}
	I0229 20:04:55.257697   14320 ssh_runner.go:235] Completed: cat /version.json: (4.9579676s)
	I0229 20:04:55.273140   14320 ssh_runner.go:195] Run: systemctl --version
	I0229 20:04:55.342135   14320 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0504812s)
	I0229 20:04:55.352579   14320 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 20:04:55.363086   14320 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 20:04:55.377390   14320 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 20:04:55.415651   14320 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 20:04:55.415752   14320 start.go:475] detecting cgroup driver to use...
	I0229 20:04:55.416006   14320 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 20:04:55.472994   14320 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 20:04:55.502995   14320 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 20:04:55.523724   14320 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 20:04:55.532605   14320 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 20:04:55.562188   14320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 20:04:55.590452   14320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 20:04:55.623913   14320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 20:04:55.652750   14320 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 20:04:55.681449   14320 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 20:04:55.710783   14320 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 20:04:55.750094   14320 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 20:04:55.785251   14320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 20:04:55.997413   14320 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 20:04:56.029256   14320 start.go:475] detecting cgroup driver to use...
	I0229 20:04:56.043253   14320 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 20:04:56.085395   14320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 20:04:56.118363   14320 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 20:04:56.163512   14320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 20:04:56.199598   14320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 20:04:56.234106   14320 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 20:04:56.285227   14320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 20:04:56.309368   14320 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 20:04:56.354388   14320 ssh_runner.go:195] Run: which cri-dockerd
	I0229 20:04:56.374421   14320 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 20:04:56.398310   14320 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 20:04:56.443749   14320 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 20:04:56.641367   14320 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 20:04:56.829503   14320 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 20:04:56.829726   14320 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 20:04:56.872626   14320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 20:04:57.068206   14320 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 20:05:58.178700   14320 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1069579s)
	I0229 20:05:58.187685   14320 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0229 20:05:58.223880   14320 out.go:177] 
	W0229 20:05:58.224511   14320 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Feb 29 20:04:27 kindnet-863900 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 20:04:27 kindnet-863900 dockerd[646]: time="2024-02-29T20:04:27.278927957Z" level=info msg="Starting up"
	Feb 29 20:04:27 kindnet-863900 dockerd[646]: time="2024-02-29T20:04:27.279797262Z" level=info msg="containerd not running, starting managed containerd"
	Feb 29 20:04:27 kindnet-863900 dockerd[646]: time="2024-02-29T20:04:27.281050557Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=652
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.310426673Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.338571599Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.339143234Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.339401695Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.339464610Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.339674059Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.339752077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.341011274Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.341681031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.341926589Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.342143040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.342385397Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.342742181Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.345663869Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.345759492Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.346275713Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.346375437Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.346487363Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.346539075Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.346553479Z" level=info msg="metadata content store policy set" policy=shared
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.360220396Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.360378934Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.360403039Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.360470655Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.360643596Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.360770126Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361175921Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361312653Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361417178Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361439183Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361455687Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361476592Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361490895Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361515601Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361533906Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361549509Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361563312Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361576316Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361597721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361614825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361629028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361643531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361665036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361680940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361694843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361709747Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361724150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361741754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361755658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361770161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361783864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361800868Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361822974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361837377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361850480Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361910094Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.362181158Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.362198962Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.362211565Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.362309988Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.362408011Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.362423215Z" level=info msg="NRI interface is disabled by configuration."
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.362876022Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.363201198Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.363418149Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.363539078Z" level=info msg="containerd successfully booted in 0.054330s"
	Feb 29 20:04:27 kindnet-863900 dockerd[646]: time="2024-02-29T20:04:27.395471696Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 29 20:04:27 kindnet-863900 dockerd[646]: time="2024-02-29T20:04:27.416187273Z" level=info msg="Loading containers: start."
	Feb 29 20:04:27 kindnet-863900 dockerd[646]: time="2024-02-29T20:04:27.665156473Z" level=info msg="Loading containers: done."
	Feb 29 20:04:27 kindnet-863900 dockerd[646]: time="2024-02-29T20:04:27.681999363Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Feb 29 20:04:27 kindnet-863900 dockerd[646]: time="2024-02-29T20:04:27.682116191Z" level=info msg="Daemon has completed initialization"
	Feb 29 20:04:27 kindnet-863900 dockerd[646]: time="2024-02-29T20:04:27.749754315Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 29 20:04:27 kindnet-863900 dockerd[646]: time="2024-02-29T20:04:27.749843236Z" level=info msg="API listen on [::]:2376"
	Feb 29 20:04:27 kindnet-863900 systemd[1]: Started Docker Application Container Engine.
	Feb 29 20:04:57 kindnet-863900 systemd[1]: Stopping Docker Application Container Engine...
	Feb 29 20:04:57 kindnet-863900 dockerd[646]: time="2024-02-29T20:04:57.263854038Z" level=info msg="Processing signal 'terminated'"
	Feb 29 20:04:57 kindnet-863900 dockerd[646]: time="2024-02-29T20:04:57.265556706Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 29 20:04:57 kindnet-863900 dockerd[646]: time="2024-02-29T20:04:57.265686611Z" level=info msg="Daemon shutdown complete"
	Feb 29 20:04:57 kindnet-863900 dockerd[646]: time="2024-02-29T20:04:57.265852918Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Feb 29 20:04:57 kindnet-863900 dockerd[646]: time="2024-02-29T20:04:57.266057726Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Feb 29 20:04:58 kindnet-863900 systemd[1]: docker.service: Deactivated successfully.
	Feb 29 20:04:58 kindnet-863900 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 20:04:58 kindnet-863900 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 20:04:58 kindnet-863900 dockerd[991]: time="2024-02-29T20:04:58.338259352Z" level=info msg="Starting up"
	Feb 29 20:05:58 kindnet-863900 dockerd[991]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 29 20:05:58 kindnet-863900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 29 20:05:58 kindnet-863900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 29 20:05:58 kindnet-863900 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.346275713Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.346375437Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.346487363Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.346539075Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.346553479Z" level=info msg="metadata content store policy set" policy=shared
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.360220396Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.360378934Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.360403039Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.360470655Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.360643596Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.360770126Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361175921Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361312653Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361417178Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361439183Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361455687Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361476592Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361490895Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361515601Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361533906Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361549509Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361563312Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361576316Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361597721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361614825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361629028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361643531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361665036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361680940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361694843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361709747Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361724150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361741754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361755658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361770161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361783864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361800868Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361822974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361837377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361850480Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.361910094Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.362181158Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.362198962Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.362211565Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.362309988Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.362408011Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.362423215Z" level=info msg="NRI interface is disabled by configuration."
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.362876022Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.363201198Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.363418149Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Feb 29 20:04:27 kindnet-863900 dockerd[652]: time="2024-02-29T20:04:27.363539078Z" level=info msg="containerd successfully booted in 0.054330s"
	Feb 29 20:04:27 kindnet-863900 dockerd[646]: time="2024-02-29T20:04:27.395471696Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 29 20:04:27 kindnet-863900 dockerd[646]: time="2024-02-29T20:04:27.416187273Z" level=info msg="Loading containers: start."
	Feb 29 20:04:27 kindnet-863900 dockerd[646]: time="2024-02-29T20:04:27.665156473Z" level=info msg="Loading containers: done."
	Feb 29 20:04:27 kindnet-863900 dockerd[646]: time="2024-02-29T20:04:27.681999363Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Feb 29 20:04:27 kindnet-863900 dockerd[646]: time="2024-02-29T20:04:27.682116191Z" level=info msg="Daemon has completed initialization"
	Feb 29 20:04:27 kindnet-863900 dockerd[646]: time="2024-02-29T20:04:27.749754315Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 29 20:04:27 kindnet-863900 dockerd[646]: time="2024-02-29T20:04:27.749843236Z" level=info msg="API listen on [::]:2376"
	Feb 29 20:04:27 kindnet-863900 systemd[1]: Started Docker Application Container Engine.
	Feb 29 20:04:57 kindnet-863900 systemd[1]: Stopping Docker Application Container Engine...
	Feb 29 20:04:57 kindnet-863900 dockerd[646]: time="2024-02-29T20:04:57.263854038Z" level=info msg="Processing signal 'terminated'"
	Feb 29 20:04:57 kindnet-863900 dockerd[646]: time="2024-02-29T20:04:57.265556706Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 29 20:04:57 kindnet-863900 dockerd[646]: time="2024-02-29T20:04:57.265686611Z" level=info msg="Daemon shutdown complete"
	Feb 29 20:04:57 kindnet-863900 dockerd[646]: time="2024-02-29T20:04:57.265852918Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Feb 29 20:04:57 kindnet-863900 dockerd[646]: time="2024-02-29T20:04:57.266057726Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Feb 29 20:04:58 kindnet-863900 systemd[1]: docker.service: Deactivated successfully.
	Feb 29 20:04:58 kindnet-863900 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 20:04:58 kindnet-863900 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 20:04:58 kindnet-863900 dockerd[991]: time="2024-02-29T20:04:58.338259352Z" level=info msg="Starting up"
	Feb 29 20:05:58 kindnet-863900 dockerd[991]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 29 20:05:58 kindnet-863900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 29 20:05:58 kindnet-863900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 29 20:05:58 kindnet-863900 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0229 20:05:58.225109   14320 out.go:239] * 
	* 
	W0229 20:05:58.226313   14320 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 20:05:58.227494   14320 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 90
--- FAIL: TestNetworkPlugins/group/kindnet/Start (371.86s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (469.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-863900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperv
E0229 20:00:23.403060    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
E0229 20:00:32.091429    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
net_test.go:112: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p calico-863900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperv: exit status 90 (7m49.5069414s)

                                                
                                                
-- stdout --
	* [calico-863900] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting control plane node calico-863900 in cluster calico-863900
	* Creating hyperv VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 20:00:16.136052   10756 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0229 20:00:16.189585   10756 out.go:291] Setting OutFile to fd 1344 ...
	I0229 20:00:16.190055   10756 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 20:00:16.190055   10756 out.go:304] Setting ErrFile to fd 776...
	I0229 20:00:16.190055   10756 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 20:00:16.214148   10756 out.go:298] Setting JSON to false
	I0229 20:00:16.218394   10756 start.go:129] hostinfo: {"hostname":"minikube5","uptime":58553,"bootTime":1709178263,"procs":197,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0229 20:00:16.218394   10756 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 20:00:16.219056   10756 out.go:177] * [calico-863900] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 20:00:16.219742   10756 notify.go:220] Checking for updates...
	I0229 20:00:16.220460   10756 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 20:00:16.220460   10756 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 20:00:16.220460   10756 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0229 20:00:16.222183   10756 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 20:00:16.222850   10756 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 20:00:16.224438   10756 config.go:182] Loaded profile config "auto-863900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 20:00:16.225116   10756 config.go:182] Loaded profile config "kindnet-863900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 20:00:16.225116   10756 config.go:182] Loaded profile config "pause-027300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 20:00:16.225116   10756 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 20:00:21.476100   10756 out.go:177] * Using the hyperv driver based on user configuration
	I0229 20:00:21.477219   10756 start.go:299] selected driver: hyperv
	I0229 20:00:21.477219   10756 start.go:903] validating driver "hyperv" against <nil>
	I0229 20:00:21.477282   10756 start.go:914] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 20:00:21.535004   10756 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 20:00:21.535979   10756 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 20:00:21.535979   10756 cni.go:84] Creating CNI manager for "calico"
	I0229 20:00:21.535979   10756 start_flags.go:318] Found "Calico" CNI - setting NetworkPlugin=cni
	I0229 20:00:21.535979   10756 start_flags.go:323] config:
	{Name:calico-863900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-863900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 20:00:21.536974   10756 iso.go:125] acquiring lock: {Name:mk91f2ee29fbed5605669750e8cfa308a1229357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 20:00:21.537979   10756 out.go:177] * Starting control plane node calico-863900 in cluster calico-863900
	I0229 20:00:21.537979   10756 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 20:00:21.538980   10756 preload.go:148] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0229 20:00:21.538980   10756 cache.go:56] Caching tarball of preloaded images
	I0229 20:00:21.538980   10756 preload.go:174] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 20:00:21.538980   10756 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0229 20:00:21.538980   10756 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\calico-863900\config.json ...
	I0229 20:00:21.539979   10756 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\calico-863900\config.json: {Name:mka400e69d8088c9280a329781b961d90d9bd293 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 20:00:21.540974   10756 start.go:365] acquiring machines lock for calico-863900: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 20:04:45.760506   10756 start.go:369] acquired machines lock for "calico-863900" in 4m24.2048563s
	I0229 20:04:45.760506   10756 start.go:93] Provisioning new machine with config: &{Name:calico-863900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-863900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 20:04:45.761047   10756 start.go:125] createHost starting for "" (driver="hyperv")
	I0229 20:04:45.761849   10756 out.go:204] * Creating hyperv VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0229 20:04:45.762188   10756 start.go:159] libmachine.API.Create for "calico-863900" (driver="hyperv")
	I0229 20:04:45.762273   10756 client.go:168] LocalClient.Create starting
	I0229 20:04:45.762737   10756 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0229 20:04:45.762918   10756 main.go:141] libmachine: Decoding PEM data...
	I0229 20:04:45.762950   10756 main.go:141] libmachine: Parsing certificate...
	I0229 20:04:45.763088   10756 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0229 20:04:45.763246   10756 main.go:141] libmachine: Decoding PEM data...
	I0229 20:04:45.763246   10756 main.go:141] libmachine: Parsing certificate...
	I0229 20:04:45.763352   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0229 20:04:47.622306   10756 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0229 20:04:47.622456   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:04:47.622456   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0229 20:04:49.296006   10756 main.go:141] libmachine: [stdout =====>] : False
	
	I0229 20:04:49.296102   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:04:49.296102   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0229 20:04:50.784664   10756 main.go:141] libmachine: [stdout =====>] : True
	
	I0229 20:04:50.784664   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:04:50.784664   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0229 20:04:54.512052   10756 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0229 20:04:54.512052   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:04:54.514601   10756 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 20:04:54.828884   10756 main.go:141] libmachine: Creating SSH key...
	I0229 20:04:55.333863   10756 main.go:141] libmachine: Creating VM...
	I0229 20:04:55.333863   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0229 20:04:58.188688   10756 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0229 20:04:58.188780   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:04:58.188780   10756 main.go:141] libmachine: Using switch "Default Switch"
	I0229 20:04:58.188780   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0229 20:04:59.877781   10756 main.go:141] libmachine: [stdout =====>] : True
	
	I0229 20:04:59.877781   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:04:59.877781   10756 main.go:141] libmachine: Creating VHD
	I0229 20:04:59.877781   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\calico-863900\fixed.vhd' -SizeBytes 10MB -Fixed
	I0229 20:05:03.550352   10756 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\calico-863900\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : C97DB1C2-5D9C-47CB-A069-263F10D68804
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0229 20:05:03.550352   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:05:03.551193   10756 main.go:141] libmachine: Writing magic tar header
	I0229 20:05:03.551193   10756 main.go:141] libmachine: Writing SSH key tar header
	I0229 20:05:03.559939   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\calico-863900\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\calico-863900\disk.vhd' -VHDType Dynamic -DeleteSource
	I0229 20:05:06.699488   10756 main.go:141] libmachine: [stdout =====>] : 
	I0229 20:05:06.699488   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:05:06.699488   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\calico-863900\disk.vhd' -SizeBytes 20000MB
	I0229 20:05:09.129003   10756 main.go:141] libmachine: [stdout =====>] : 
	I0229 20:05:09.129003   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:05:09.129003   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM calico-863900 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\calico-863900' -SwitchName 'Default Switch' -MemoryStartupBytes 3072MB
	I0229 20:05:12.485617   10756 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	calico-863900 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0229 20:05:12.485617   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:05:12.485617   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName calico-863900 -DynamicMemoryEnabled $false
	I0229 20:05:14.671804   10756 main.go:141] libmachine: [stdout =====>] : 
	I0229 20:05:14.671804   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:05:14.671804   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor calico-863900 -Count 2
	I0229 20:05:16.780970   10756 main.go:141] libmachine: [stdout =====>] : 
	I0229 20:05:16.780970   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:05:16.781244   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName calico-863900 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\calico-863900\boot2docker.iso'
	I0229 20:05:19.211930   10756 main.go:141] libmachine: [stdout =====>] : 
	I0229 20:05:19.212659   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:05:19.212659   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName calico-863900 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\calico-863900\disk.vhd'
	I0229 20:05:21.737082   10756 main.go:141] libmachine: [stdout =====>] : 
	I0229 20:05:21.737515   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:05:21.737515   10756 main.go:141] libmachine: Starting VM...
	I0229 20:05:21.737515   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM calico-863900
	I0229 20:05:24.513430   10756 main.go:141] libmachine: [stdout =====>] : 
	I0229 20:05:24.513568   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:05:24.513568   10756 main.go:141] libmachine: Waiting for host to start...
	I0229 20:05:24.513568   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-863900 ).state
	I0229 20:05:26.745169   10756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:05:26.745605   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:05:26.745728   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:05:29.159170   10756 main.go:141] libmachine: [stdout =====>] : 
	I0229 20:05:29.159373   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:05:30.169837   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-863900 ).state
	I0229 20:05:32.351510   10756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:05:32.351510   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:05:32.351510   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:05:34.798606   10756 main.go:141] libmachine: [stdout =====>] : 
	I0229 20:05:34.798724   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:05:35.801778   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-863900 ).state
	I0229 20:05:37.806918   10756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:05:37.807082   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:05:37.807159   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:05:40.243325   10756 main.go:141] libmachine: [stdout =====>] : 
	I0229 20:05:40.243325   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:05:41.255650   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-863900 ).state
	I0229 20:05:43.410203   10756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:05:43.410321   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:05:43.410321   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:05:45.809833   10756 main.go:141] libmachine: [stdout =====>] : 
	I0229 20:05:45.810435   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:05:46.818871   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-863900 ).state
	I0229 20:05:48.881080   10756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:05:48.881080   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:05:48.881080   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:05:51.529043   10756 main.go:141] libmachine: [stdout =====>] : 172.26.60.231
	
	I0229 20:05:51.529043   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:05:51.529190   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-863900 ).state
	I0229 20:05:53.583737   10756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:05:53.583737   10756 main.go:141] libmachine: [stderr =====>] : 
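The repeated `( Get-VM ).state` / `(( Get-VM ).networkadapters[0]).ipaddresses[0]` pairs above are the driver's wait-for-host loop: the state query returns `Running` almost immediately, but the adapter reports an empty address until the guest obtains a DHCP lease (here at 20:05:51, ~27s after `Start-VM`). The shape of that loop can be sketched in Go as follows — `waitForIP` and the stub are hypothetical names, standing in for the actual PowerShell round-trips:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP polls a query (standing in for the PowerShell call that reads
// the VM's first adapter address) until it returns a non-empty IP or the
// attempt budget is exhausted. Illustrative sketch, not the driver's code.
func waitForIP(query func() (string, error), interval time.Duration, attempts int) (string, error) {
	for i := 0; i < attempts; i++ {
		ip, err := query()
		if err != nil {
			return "", err
		}
		if ip != "" {
			return ip, nil
		}
		time.Sleep(interval)
	}
	return "", errors.New("timed out waiting for VM IP")
}

func main() {
	calls := 0
	stub := func() (string, error) {
		calls++
		if calls < 4 {
			return "", nil // adapter not yet reporting an address
		}
		return "172.26.60.231", nil
	}
	ip, err := waitForIP(stub, time.Millisecond, 10)
	fmt.Println(ip, err)
}
```

Each real iteration costs two PowerShell invocations (~2s each in this log), which is why the gap between "Waiting for host to start..." and the first SSH command is close to half a minute.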
	I0229 20:05:53.583737   10756 machine.go:88] provisioning docker machine ...
	I0229 20:05:53.583737   10756 buildroot.go:166] provisioning hostname "calico-863900"
	I0229 20:05:53.583737   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-863900 ).state
	I0229 20:05:55.633409   10756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:05:55.634040   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:05:55.634040   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:05:58.045190   10756 main.go:141] libmachine: [stdout =====>] : 172.26.60.231
	
	I0229 20:05:58.045190   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:05:58.049798   10756 main.go:141] libmachine: Using SSH client type: native
	I0229 20:05:58.058602   10756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.60.231 22 <nil> <nil>}
	I0229 20:05:58.058602   10756 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-863900 && echo "calico-863900" | sudo tee /etc/hostname
	I0229 20:05:58.223880   10756 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-863900
	
	I0229 20:05:58.223880   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-863900 ).state
	I0229 20:06:00.431008   10756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:06:00.431008   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:06:00.431179   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:06:02.968843   10756 main.go:141] libmachine: [stdout =====>] : 172.26.60.231
	
	I0229 20:06:02.968843   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:06:02.972182   10756 main.go:141] libmachine: Using SSH client type: native
	I0229 20:06:02.972960   10756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.60.231 22 <nil> <nil>}
	I0229 20:06:02.972960   10756 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-863900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-863900/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-863900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 20:06:03.136488   10756 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0229 20:06:03.136488   10756 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0229 20:06:03.137026   10756 buildroot.go:174] setting up certificates
	I0229 20:06:03.137069   10756 provision.go:83] configureAuth start
	I0229 20:06:03.137117   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-863900 ).state
	I0229 20:06:05.249212   10756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:06:05.249212   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:06:05.250135   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:06:07.684660   10756 main.go:141] libmachine: [stdout =====>] : 172.26.60.231
	
	I0229 20:06:07.685514   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:06:07.685514   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-863900 ).state
	I0229 20:06:09.802231   10756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:06:09.802231   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:06:09.802231   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:06:12.371838   10756 main.go:141] libmachine: [stdout =====>] : 172.26.60.231
	
	I0229 20:06:12.371932   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:06:12.371932   10756 provision.go:138] copyHostCerts
	I0229 20:06:12.372420   10756 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0229 20:06:12.372420   10756 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0229 20:06:12.372982   10756 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0229 20:06:12.374422   10756 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0229 20:06:12.374468   10756 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0229 20:06:12.374588   10756 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0229 20:06:12.376293   10756 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0229 20:06:12.376368   10756 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0229 20:06:12.376737   10756 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0229 20:06:12.377967   10756 provision.go:112] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.calico-863900 san=[172.26.60.231 172.26.60.231 localhost 127.0.0.1 minikube calico-863900]
	I0229 20:06:12.640538   10756 provision.go:172] copyRemoteCerts
	I0229 20:06:12.649264   10756 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 20:06:12.649396   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-863900 ).state
	I0229 20:06:14.716823   10756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:06:14.717131   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:06:14.717279   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:06:17.161509   10756 main.go:141] libmachine: [stdout =====>] : 172.26.60.231
	
	I0229 20:06:17.162503   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:06:17.163026   10756 sshutil.go:53] new ssh client: &{IP:172.26.60.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\calico-863900\id_rsa Username:docker}
	I0229 20:06:17.273775   10756 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6241219s)
	I0229 20:06:17.273939   10756 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 20:06:17.321399   10756 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0229 20:06:17.378896   10756 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 20:06:17.443322   10756 provision.go:86] duration metric: configureAuth took 14.305456s
	I0229 20:06:17.443322   10756 buildroot.go:189] setting minikube options for container-runtime
	I0229 20:06:17.444151   10756 config.go:182] Loaded profile config "calico-863900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 20:06:17.444257   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-863900 ).state
	I0229 20:06:19.648106   10756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:06:19.648106   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:06:19.648106   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:06:22.209145   10756 main.go:141] libmachine: [stdout =====>] : 172.26.60.231
	
	I0229 20:06:22.209145   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:06:22.213732   10756 main.go:141] libmachine: Using SSH client type: native
	I0229 20:06:22.214139   10756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.60.231 22 <nil> <nil>}
	I0229 20:06:22.214228   10756 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 20:06:22.365983   10756 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 20:06:22.365983   10756 buildroot.go:70] root file system type: tmpfs
	I0229 20:06:22.366203   10756 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 20:06:22.366340   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-863900 ).state
	I0229 20:06:24.463778   10756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:06:24.463778   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:06:24.463778   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:06:26.950706   10756 main.go:141] libmachine: [stdout =====>] : 172.26.60.231
	
	I0229 20:06:26.950706   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:06:26.955822   10756 main.go:141] libmachine: Using SSH client type: native
	I0229 20:06:26.956339   10756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.60.231 22 <nil> <nil>}
	I0229 20:06:26.956511   10756 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 20:06:27.126013   10756 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 20:06:27.126013   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-863900 ).state
	I0229 20:06:29.262545   10756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:06:29.263238   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:06:29.263366   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:06:31.834096   10756 main.go:141] libmachine: [stdout =====>] : 172.26.60.231
	
	I0229 20:06:31.835140   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:06:31.839928   10756 main.go:141] libmachine: Using SSH client type: native
	I0229 20:06:31.840360   10756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.60.231 22 <nil> <nil>}
	I0229 20:06:31.840360   10756 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 20:06:33.011099   10756 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
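The `diff ... || { mv ...; systemctl ...; }` one-liner above is an idempotent install-if-changed pattern: if the new unit matches the installed one, `diff` succeeds and the restart is skipped; if it differs (or, as here, does not exist yet, hence the `can't stat` message), the new file is moved into place and the service is reloaded. The pattern in isolation, using throwaway temp files rather than real systemd units:

```shell
#!/bin/sh
# Install-if-changed: only replace the target when the new file differs.
# Uses mktemp scratch files; no systemd involved in this sketch.
set -e
tmp=$(mktemp -d)
printf 'v1\n' > "$tmp/unit"
printf 'v2\n' > "$tmp/unit.new"
# diff exits non-zero on difference, triggering the replacement branch
diff -u "$tmp/unit" "$tmp/unit.new" >/dev/null 2>&1 || mv "$tmp/unit.new" "$tmp/unit"
cat "$tmp/unit"
```

Note the non-zero `diff` exit status is what drives the update, so a missing target file (as in this log) takes the same replacement path as a changed one.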
	I0229 20:06:33.011099   10756 machine.go:91] provisioned docker machine in 39.4251664s
	I0229 20:06:33.011099   10756 client.go:171] LocalClient.Create took 1m47.2428568s
	I0229 20:06:33.011099   10756 start.go:167] duration metric: libmachine.API.Create for "calico-863900" took 1m47.2429415s
	I0229 20:06:33.011099   10756 start.go:300] post-start starting for "calico-863900" (driver="hyperv")
	I0229 20:06:33.011099   10756 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 20:06:33.022838   10756 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 20:06:33.022838   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-863900 ).state
	I0229 20:06:35.093582   10756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:06:35.093582   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:06:35.094041   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:06:37.572144   10756 main.go:141] libmachine: [stdout =====>] : 172.26.60.231
	
	I0229 20:06:37.572144   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:06:37.572144   10756 sshutil.go:53] new ssh client: &{IP:172.26.60.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\calico-863900\id_rsa Username:docker}
	I0229 20:06:37.676371   10756 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6532734s)
	I0229 20:06:37.690597   10756 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 20:06:37.697631   10756 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 20:06:37.697631   10756 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0229 20:06:37.698260   10756 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0229 20:06:37.698875   10756 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem -> 43562.pem in /etc/ssl/certs
	I0229 20:06:37.708791   10756 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 20:06:37.727677   10756 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem --> /etc/ssl/certs/43562.pem (1708 bytes)
	I0229 20:06:37.782322   10756 start.go:303] post-start completed in 4.7709567s
	I0229 20:06:37.785306   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-863900 ).state
	I0229 20:06:39.920645   10756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:06:39.920645   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:06:39.920645   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:06:42.416744   10756 main.go:141] libmachine: [stdout =====>] : 172.26.60.231
	
	I0229 20:06:42.416852   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:06:42.417037   10756 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\calico-863900\config.json ...
	I0229 20:06:42.419804   10756 start.go:128] duration metric: createHost completed in 1m56.6522629s
	I0229 20:06:42.419804   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-863900 ).state
	I0229 20:06:44.469700   10756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:06:44.469700   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:06:44.469910   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:06:46.986296   10756 main.go:141] libmachine: [stdout =====>] : 172.26.60.231
	
	I0229 20:06:46.986296   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:06:46.990978   10756 main.go:141] libmachine: Using SSH client type: native
	I0229 20:06:46.991864   10756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.60.231 22 <nil> <nil>}
	I0229 20:06:46.991921   10756 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 20:06:47.154428   10756 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709237207.310319283
	
	I0229 20:06:47.154428   10756 fix.go:206] guest clock: 1709237207.310319283
	I0229 20:06:47.154497   10756 fix.go:219] Guest: 2024-02-29 20:06:47.310319283 +0000 UTC Remote: 2024-02-29 20:06:42.4198043 +0000 UTC m=+386.351679001 (delta=4.890514983s)
	I0229 20:06:47.154559   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-863900 ).state
	I0229 20:06:49.385126   10756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:06:49.385126   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:06:49.385126   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:06:51.887839   10756 main.go:141] libmachine: [stdout =====>] : 172.26.60.231
	
	I0229 20:06:51.887839   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:06:51.891519   10756 main.go:141] libmachine: Using SSH client type: native
	I0229 20:06:51.892124   10756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.60.231 22 <nil> <nil>}
	I0229 20:06:51.892124   10756 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709237207
	I0229 20:06:52.046712   10756 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Feb 29 20:06:47 UTC 2024
	
	I0229 20:06:52.046712   10756 fix.go:226] clock set: Thu Feb 29 20:06:47 UTC 2024
	 (err=<nil>)
	I0229 20:06:52.046712   10756 start.go:83] releasing machines lock for "calico-863900", held for 2m6.2791758s
	I0229 20:06:52.046712   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-863900 ).state
	I0229 20:06:54.307022   10756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:06:54.308017   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:06:54.308084   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:06:56.971704   10756 main.go:141] libmachine: [stdout =====>] : 172.26.60.231
	
	I0229 20:06:56.971777   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:06:56.975529   10756 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 20:06:56.975529   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-863900 ).state
	I0229 20:06:56.983505   10756 ssh_runner.go:195] Run: cat /version.json
	I0229 20:06:56.983505   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM calico-863900 ).state
	I0229 20:06:59.452472   10756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:06:59.452535   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:06:59.452599   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:06:59.519699   10756 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:06:59.520701   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:06:59.520743   10756 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM calico-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:07:02.307608   10756 main.go:141] libmachine: [stdout =====>] : 172.26.60.231
	
	I0229 20:07:02.308512   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:07:02.308895   10756 sshutil.go:53] new ssh client: &{IP:172.26.60.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\calico-863900\id_rsa Username:docker}
	I0229 20:07:02.379397   10756 main.go:141] libmachine: [stdout =====>] : 172.26.60.231
	
	I0229 20:07:02.379397   10756 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:07:02.380153   10756 sshutil.go:53] new ssh client: &{IP:172.26.60.231 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\calico-863900\id_rsa Username:docker}
	I0229 20:07:02.495894   10756 ssh_runner.go:235] Completed: cat /version.json: (5.5120822s)
	I0229 20:07:02.496423   10756 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.5200581s)
	I0229 20:07:02.507742   10756 ssh_runner.go:195] Run: systemctl --version
	I0229 20:07:02.534478   10756 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 20:07:02.546147   10756 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 20:07:02.558533   10756 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 20:07:02.593385   10756 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 20:07:02.593385   10756 start.go:475] detecting cgroup driver to use...
	I0229 20:07:02.593638   10756 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 20:07:02.650502   10756 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 20:07:02.688708   10756 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 20:07:02.710452   10756 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 20:07:02.720330   10756 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 20:07:02.752077   10756 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 20:07:02.783732   10756 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 20:07:02.815332   10756 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 20:07:02.847071   10756 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 20:07:02.878527   10756 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 20:07:02.912543   10756 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 20:07:02.942643   10756 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 20:07:02.978802   10756 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 20:07:03.178922   10756 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 20:07:03.211783   10756 start.go:475] detecting cgroup driver to use...
	I0229 20:07:03.227711   10756 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 20:07:03.276265   10756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 20:07:03.315925   10756 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 20:07:03.388095   10756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 20:07:03.424802   10756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 20:07:03.467223   10756 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 20:07:03.535335   10756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 20:07:03.560818   10756 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 20:07:03.612514   10756 ssh_runner.go:195] Run: which cri-dockerd
	I0229 20:07:03.628597   10756 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 20:07:03.645596   10756 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 20:07:03.687795   10756 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 20:07:03.892313   10756 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 20:07:04.075836   10756 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 20:07:04.075836   10756 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 20:07:04.122026   10756 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 20:07:04.331261   10756 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 20:08:05.467496   10756 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1328299s)
	I0229 20:08:05.476682   10756 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0229 20:08:05.511057   10756 out.go:177] 
	W0229 20:08:05.512099   10756 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Feb 29 20:06:32 calico-863900 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 20:06:32 calico-863900 dockerd[652]: time="2024-02-29T20:06:32.666154570Z" level=info msg="Starting up"
	Feb 29 20:06:32 calico-863900 dockerd[652]: time="2024-02-29T20:06:32.666970203Z" level=info msg="containerd not running, starting managed containerd"
	Feb 29 20:06:32 calico-863900 dockerd[652]: time="2024-02-29T20:06:32.668244242Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=658
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.700623697Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.728684238Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.728799215Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.728869800Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.728885897Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.728988076Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.729003273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.729409489Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.729510269Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.729531664Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.729543362Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.729650340Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.729974973Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.733732702Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.733871474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.734153816Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.734311083Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.734464752Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.734660712Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.734788985Z" level=info msg="metadata content store policy set" policy=shared
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.746334716Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.746417399Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.746448493Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.746478286Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.746506481Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.746734234Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.747357306Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.747620852Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.747685739Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.747713533Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.747741327Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.747767222Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.747791317Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.747817112Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.747851905Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.747876599Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.747905294Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.747926989Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.747960782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.747984977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.748009872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.748033867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.748056363Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.748131847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.748250623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.748287615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.748312010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.748338905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.748360500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.748382596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.748411790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.748440584Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.748505870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.748669237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.748699531Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.748771816Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.748902789Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.748928484Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.748949879Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.749292609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.749453076Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.749482570Z" level=info msg="NRI interface is disabled by configuration."
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.749868091Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.750143934Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.750366488Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.750635933Z" level=info msg="containerd successfully booted in 0.051751s"
	Feb 29 20:06:32 calico-863900 dockerd[652]: time="2024-02-29T20:06:32.789895376Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 29 20:06:32 calico-863900 dockerd[652]: time="2024-02-29T20:06:32.804683941Z" level=info msg="Loading containers: start."
	Feb 29 20:06:33 calico-863900 dockerd[652]: time="2024-02-29T20:06:33.092640975Z" level=info msg="Loading containers: done."
	Feb 29 20:06:33 calico-863900 dockerd[652]: time="2024-02-29T20:06:33.108656485Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Feb 29 20:06:33 calico-863900 dockerd[652]: time="2024-02-29T20:06:33.109063107Z" level=info msg="Daemon has completed initialization"
	Feb 29 20:06:33 calico-863900 dockerd[652]: time="2024-02-29T20:06:33.166536519Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 29 20:06:33 calico-863900 dockerd[652]: time="2024-02-29T20:06:33.166780672Z" level=info msg="API listen on [::]:2376"
	Feb 29 20:06:33 calico-863900 systemd[1]: Started Docker Application Container Engine.
	Feb 29 20:07:04 calico-863900 dockerd[652]: time="2024-02-29T20:07:04.513707859Z" level=info msg="Processing signal 'terminated'"
	Feb 29 20:07:04 calico-863900 systemd[1]: Stopping Docker Application Container Engine...
	Feb 29 20:07:04 calico-863900 dockerd[652]: time="2024-02-29T20:07:04.514993913Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 29 20:07:04 calico-863900 dockerd[652]: time="2024-02-29T20:07:04.515786685Z" level=info msg="Daemon shutdown complete"
	Feb 29 20:07:04 calico-863900 dockerd[652]: time="2024-02-29T20:07:04.515835483Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Feb 29 20:07:04 calico-863900 dockerd[652]: time="2024-02-29T20:07:04.515972678Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Feb 29 20:07:05 calico-863900 systemd[1]: docker.service: Deactivated successfully.
	Feb 29 20:07:05 calico-863900 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 20:07:05 calico-863900 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 20:07:05 calico-863900 dockerd[990]: time="2024-02-29T20:07:05.612472771Z" level=info msg="Starting up"
	Feb 29 20:08:05 calico-863900 dockerd[990]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 29 20:08:05 calico-863900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 29 20:08:05 calico-863900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 29 20:08:05 calico-863900 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
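The fatal line in the journalctl excerpt above is dockerd timing out while dialing `/run/containerd/containerd.sock` after the minikube-triggered restart. When triaging many such excerpts across failed runs, it can help to pull that line out programmatically; below is a minimal sketch (not part of minikube — the regex and function name are illustrative, and the sample lines are copied from the log above):

```python
import re
from typing import Optional

# Matches the fatal dockerd journal line, e.g.
# "Feb 29 20:08:05 calico-863900 dockerd[990]: failed to start daemon: ..."
FATAL_RE = re.compile(r"dockerd\[\d+\]:\s+(failed to start daemon: .+)")

def extract_docker_failure(journal_text: str) -> Optional[str]:
    """Return the first 'failed to start daemon' message in a
    journalctl excerpt, or None if no such line is present."""
    for line in journal_text.splitlines():
        m = FATAL_RE.search(line)
        if m:
            return m.group(1)
    return None

sample = '''\
Feb 29 20:07:05 calico-863900 dockerd[990]: time="2024-02-29T20:07:05.612472771Z" level=info msg="Starting up"
Feb 29 20:08:05 calico-863900 dockerd[990]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Feb 29 20:08:05 calico-863900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
'''

print(extract_docker_failure(sample))
```

On the sample above this prints the `failed to start daemon: failed to dial ...` message, which matches the RUNTIME_ENABLE exit reason reported by minikube.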
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.747741327Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.747767222Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.747791317Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.747817112Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.747851905Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.747876599Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.747905294Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.747926989Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.747960782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.747984977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.748009872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.748033867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.748056363Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.748131847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.748250623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.748287615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.748312010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.748338905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.748360500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.748382596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.748411790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.748440584Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.748505870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.748669237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.748699531Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.748771816Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.748902789Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.748928484Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.748949879Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.749292609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.749453076Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.749482570Z" level=info msg="NRI interface is disabled by configuration."
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.749868091Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.750143934Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.750366488Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Feb 29 20:06:32 calico-863900 dockerd[658]: time="2024-02-29T20:06:32.750635933Z" level=info msg="containerd successfully booted in 0.051751s"
	Feb 29 20:06:32 calico-863900 dockerd[652]: time="2024-02-29T20:06:32.789895376Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 29 20:06:32 calico-863900 dockerd[652]: time="2024-02-29T20:06:32.804683941Z" level=info msg="Loading containers: start."
	Feb 29 20:06:33 calico-863900 dockerd[652]: time="2024-02-29T20:06:33.092640975Z" level=info msg="Loading containers: done."
	Feb 29 20:06:33 calico-863900 dockerd[652]: time="2024-02-29T20:06:33.108656485Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Feb 29 20:06:33 calico-863900 dockerd[652]: time="2024-02-29T20:06:33.109063107Z" level=info msg="Daemon has completed initialization"
	Feb 29 20:06:33 calico-863900 dockerd[652]: time="2024-02-29T20:06:33.166536519Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 29 20:06:33 calico-863900 dockerd[652]: time="2024-02-29T20:06:33.166780672Z" level=info msg="API listen on [::]:2376"
	Feb 29 20:06:33 calico-863900 systemd[1]: Started Docker Application Container Engine.
	Feb 29 20:07:04 calico-863900 dockerd[652]: time="2024-02-29T20:07:04.513707859Z" level=info msg="Processing signal 'terminated'"
	Feb 29 20:07:04 calico-863900 systemd[1]: Stopping Docker Application Container Engine...
	Feb 29 20:07:04 calico-863900 dockerd[652]: time="2024-02-29T20:07:04.514993913Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Feb 29 20:07:04 calico-863900 dockerd[652]: time="2024-02-29T20:07:04.515786685Z" level=info msg="Daemon shutdown complete"
	Feb 29 20:07:04 calico-863900 dockerd[652]: time="2024-02-29T20:07:04.515835483Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Feb 29 20:07:04 calico-863900 dockerd[652]: time="2024-02-29T20:07:04.515972678Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Feb 29 20:07:05 calico-863900 systemd[1]: docker.service: Deactivated successfully.
	Feb 29 20:07:05 calico-863900 systemd[1]: Stopped Docker Application Container Engine.
	Feb 29 20:07:05 calico-863900 systemd[1]: Starting Docker Application Container Engine...
	Feb 29 20:07:05 calico-863900 dockerd[990]: time="2024-02-29T20:07:05.612472771Z" level=info msg="Starting up"
	Feb 29 20:08:05 calico-863900 dockerd[990]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Feb 29 20:08:05 calico-863900 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Feb 29 20:08:05 calico-863900 systemd[1]: docker.service: Failed with result 'exit-code'.
	Feb 29 20:08:05 calico-863900 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0229 20:08:05.512659   10756 out.go:239] * 
	* 
	W0229 20:08:05.514544   10756 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 20:08:05.515219   10756 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 90
--- FAIL: TestNetworkPlugins/group/calico/Start (469.64s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (372.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-863900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperv
net_test.go:112: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p bridge-863900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperv: exit status 93 (6m12.5627104s)

                                                
                                                
-- stdout --
	* [bridge-863900] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting control plane node bridge-863900 in cluster bridge-863900
	* Creating hyperv VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0229 20:17:41.801811    6340 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0229 20:17:41.881849    6340 out.go:291] Setting OutFile to fd 680 ...
	I0229 20:17:41.882389    6340 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 20:17:41.882389    6340 out.go:304] Setting ErrFile to fd 1380...
	I0229 20:17:41.882389    6340 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 20:17:41.910898    6340 out.go:298] Setting JSON to false
	I0229 20:17:41.915984    6340 start.go:129] hostinfo: {"hostname":"minikube5","uptime":59598,"bootTime":1709178263,"procs":203,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0229 20:17:41.918648    6340 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 20:17:41.921016    6340 out.go:177] * [bridge-863900] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 20:17:41.921744    6340 notify.go:220] Checking for updates...
	I0229 20:17:41.922468    6340 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 20:17:41.923196    6340 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 20:17:41.923867    6340 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0229 20:17:41.924579    6340 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 20:17:41.925187    6340 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 20:17:41.928039    6340 config.go:182] Loaded profile config "enable-default-cni-863900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 20:17:41.928671    6340 config.go:182] Loaded profile config "false-863900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 20:17:41.929347    6340 config.go:182] Loaded profile config "flannel-863900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 20:17:41.929602    6340 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 20:17:47.714118    6340 out.go:177] * Using the hyperv driver based on user configuration
	I0229 20:17:47.714854    6340 start.go:299] selected driver: hyperv
	I0229 20:17:47.714854    6340 start.go:903] validating driver "hyperv" against <nil>
	I0229 20:17:47.714854    6340 start.go:914] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0229 20:17:47.761939    6340 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 20:17:47.763457    6340 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0229 20:17:47.763457    6340 cni.go:84] Creating CNI manager for "bridge"
	I0229 20:17:47.763457    6340 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 20:17:47.763457    6340 start_flags.go:323] config:
	{Name:bridge-863900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-863900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker C
RISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 20:17:47.764307    6340 iso.go:125] acquiring lock: {Name:mk91f2ee29fbed5605669750e8cfa308a1229357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 20:17:47.765489    6340 out.go:177] * Starting control plane node bridge-863900 in cluster bridge-863900
	I0229 20:17:47.766495    6340 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 20:17:47.766495    6340 preload.go:148] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0229 20:17:47.766495    6340 cache.go:56] Caching tarball of preloaded images
	I0229 20:17:47.766495    6340 preload.go:174] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0229 20:17:47.766495    6340 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0229 20:17:47.766495    6340 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\bridge-863900\config.json ...
	I0229 20:17:47.767536    6340 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\bridge-863900\config.json: {Name:mk83ef3132a3d16198c1b2421b0f50e86bcdc2a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 20:17:47.768391    6340 start.go:365] acquiring machines lock for bridge-863900: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0229 20:18:00.102969    6340 start.go:369] acquired machines lock for "bridge-863900" in 12.3338339s
	I0229 20:18:00.103260    6340 start.go:93] Provisioning new machine with config: &{Name:bridge-863900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.4 ClusterName:bridge-863900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0229 20:18:00.103520    6340 start.go:125] createHost starting for "" (driver="hyperv")
	I0229 20:18:00.104731    6340 out.go:204] * Creating hyperv VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0229 20:18:00.105194    6340 start.go:159] libmachine.API.Create for "bridge-863900" (driver="hyperv")
	I0229 20:18:00.105256    6340 client.go:168] LocalClient.Create starting
	I0229 20:18:00.105704    6340 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0229 20:18:00.106046    6340 main.go:141] libmachine: Decoding PEM data...
	I0229 20:18:00.106103    6340 main.go:141] libmachine: Parsing certificate...
	I0229 20:18:00.106103    6340 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0229 20:18:00.106329    6340 main.go:141] libmachine: Decoding PEM data...
	I0229 20:18:00.106329    6340 main.go:141] libmachine: Parsing certificate...
	I0229 20:18:00.106329    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0229 20:18:02.276233    6340 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0229 20:18:02.276233    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:18:02.276233    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0229 20:18:04.083806    6340 main.go:141] libmachine: [stdout =====>] : False
	
	I0229 20:18:04.092612    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:18:04.092700    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0229 20:18:06.031562    6340 main.go:141] libmachine: [stdout =====>] : True
	
	I0229 20:18:06.031562    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:18:06.031562    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0229 20:18:09.641227    6340 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0229 20:18:09.653550    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:18:09.655898    6340 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1708638130-18020-amd64.iso...
	I0229 20:18:10.038080    6340 main.go:141] libmachine: Creating SSH key...
	I0229 20:18:10.244763    6340 main.go:141] libmachine: Creating VM...
	I0229 20:18:10.244763    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0229 20:18:13.542779    6340 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0229 20:18:13.542884    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:18:13.543247    6340 main.go:141] libmachine: Using switch "Default Switch"
	I0229 20:18:13.543339    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0229 20:18:15.432576    6340 main.go:141] libmachine: [stdout =====>] : True
	
	I0229 20:18:15.432576    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:18:15.432671    6340 main.go:141] libmachine: Creating VHD
	I0229 20:18:15.432750    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\bridge-863900\fixed.vhd' -SizeBytes 10MB -Fixed
	I0229 20:18:19.993298    6340 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\bridge-863900\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 3135EFB0-CB5B-4517-A75D-7B00F6753163
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0229 20:18:19.993388    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:18:19.993388    6340 main.go:141] libmachine: Writing magic tar header
	I0229 20:18:19.993476    6340 main.go:141] libmachine: Writing SSH key tar header
	I0229 20:18:20.002387    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\bridge-863900\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\bridge-863900\disk.vhd' -VHDType Dynamic -DeleteSource
	I0229 20:18:23.433419    6340 main.go:141] libmachine: [stdout =====>] : 
	I0229 20:18:23.433741    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:18:23.433741    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\bridge-863900\disk.vhd' -SizeBytes 20000MB
	I0229 20:18:26.107330    6340 main.go:141] libmachine: [stdout =====>] : 
	I0229 20:18:26.107362    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:18:26.107406    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM bridge-863900 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\bridge-863900' -SwitchName 'Default Switch' -MemoryStartupBytes 3072MB
	I0229 20:18:31.077285    6340 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	bridge-863900 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0229 20:18:31.077285    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:18:31.077504    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName bridge-863900 -DynamicMemoryEnabled $false
	I0229 20:18:33.469540    6340 main.go:141] libmachine: [stdout =====>] : 
	I0229 20:18:33.478017    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:18:33.478017    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor bridge-863900 -Count 2
	I0229 20:18:35.849729    6340 main.go:141] libmachine: [stdout =====>] : 
	I0229 20:18:35.859812    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:18:35.859812    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName bridge-863900 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\bridge-863900\boot2docker.iso'
	I0229 20:18:38.497218    6340 main.go:141] libmachine: [stdout =====>] : 
	I0229 20:18:38.501978    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:18:38.501978    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName bridge-863900 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\bridge-863900\disk.vhd'
	I0229 20:18:41.382121    6340 main.go:141] libmachine: [stdout =====>] : 
	I0229 20:18:41.382121    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:18:41.382121    6340 main.go:141] libmachine: Starting VM...
	I0229 20:18:41.392923    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM bridge-863900
	I0229 20:18:44.554716    6340 main.go:141] libmachine: [stdout =====>] : 
	I0229 20:18:44.554786    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:18:44.554786    6340 main.go:141] libmachine: Waiting for host to start...
	I0229 20:18:44.554841    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-863900 ).state
	I0229 20:18:47.112689    6340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:18:47.112769    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:18:47.112856    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM bridge-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:18:49.944192    6340 main.go:141] libmachine: [stdout =====>] : 
	I0229 20:18:49.944192    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:18:50.949755    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-863900 ).state
	I0229 20:18:53.429254    6340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:18:53.429410    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:18:53.429559    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM bridge-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:18:56.275871    6340 main.go:141] libmachine: [stdout =====>] : 
	I0229 20:18:56.285974    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:18:57.295627    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-863900 ).state
	I0229 20:18:59.653125    6340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:18:59.653125    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:18:59.661125    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM bridge-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:19:02.652516    6340 main.go:141] libmachine: [stdout =====>] : 
	I0229 20:19:02.652516    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:19:03.659037    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-863900 ).state
	I0229 20:19:06.523391    6340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:19:06.535787    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:19:06.535787    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM bridge-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:19:09.895949    6340 main.go:141] libmachine: [stdout =====>] : 
	I0229 20:19:09.895949    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:19:10.909872    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-863900 ).state
	I0229 20:19:13.521325    6340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:19:13.521325    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:19:13.521884    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM bridge-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:19:16.361152    6340 main.go:141] libmachine: [stdout =====>] : 172.26.60.17
	
	I0229 20:19:16.361152    6340 main.go:141] libmachine: [stderr =====>] : 
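The repeated `( Hyper-V\Get-VM … ).state` / `ipaddresses[0]` calls above are a poll loop: the driver re-queries the VM until a DHCP lease shows up. The same pattern can be sketched in plain shell; the `query_vm_ip` function and the attempt counts here are stand-ins for the PowerShell invocations, not the real driver code:

```shell
#!/bin/sh
# Simulated IP poll: the real driver shells out to PowerShell each round;
# here a counter file stands in for "the guest has not leased an IP yet".
attempts_file=$(mktemp)
echo 0 > "$attempts_file"

query_vm_ip() {
    n=$(( $(cat "$attempts_file") + 1 ))
    echo "$n" > "$attempts_file"
    # Pretend the first 3 polls come back empty, then an address appears.
    if [ "$n" -ge 4 ]; then
        echo "172.26.60.17"
    fi
}

ip=""
tries=0
while [ -z "$ip" ] && [ "$tries" -lt 10 ]; do
    ip=$(query_vm_ip)
    tries=$((tries + 1))
    # The real driver sleeps ~1s between PowerShell calls here.
done

echo "got $ip after $tries polls"
rm -f "$attempts_file"
```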
	I0229 20:19:16.361152    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-863900 ).state
	I0229 20:19:18.595306    6340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:19:18.595522    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:19:18.595522    6340 machine.go:88] provisioning docker machine ...
	I0229 20:19:18.595657    6340 buildroot.go:166] provisioning hostname "bridge-863900"
	I0229 20:19:18.595744    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-863900 ).state
	I0229 20:19:20.863838    6340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:19:20.869704    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:19:20.869704    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM bridge-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:19:23.469705    6340 main.go:141] libmachine: [stdout =====>] : 172.26.60.17
	
	I0229 20:19:23.469769    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:19:23.475263    6340 main.go:141] libmachine: Using SSH client type: native
	I0229 20:19:23.486264    6340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.60.17 22 <nil> <nil>}
	I0229 20:19:23.486264    6340 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-863900 && echo "bridge-863900" | sudo tee /etc/hostname
	I0229 20:19:23.657342    6340 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-863900
	
	I0229 20:19:23.657342    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-863900 ).state
	I0229 20:19:25.854104    6340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:19:25.854104    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:19:25.866445    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM bridge-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:19:28.862596    6340 main.go:141] libmachine: [stdout =====>] : 172.26.60.17
	
	I0229 20:19:28.862596    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:19:28.869016    6340 main.go:141] libmachine: Using SSH client type: native
	I0229 20:19:28.869672    6340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.60.17 22 <nil> <nil>}
	I0229 20:19:28.869672    6340 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-863900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-863900/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-863900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0229 20:19:29.039595    6340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
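The SSH command above is an idempotent `/etc/hosts` update: skip if the hostname is already present, rewrite an existing `127.0.1.1` line if there is one, otherwise append. A runnable sketch of the same logic against a temp file (no root, `HOSTS` and the hostname are stand-ins; `[[:space:]]` replaces the log's non-portable `\s`):

```shell
#!/bin/sh
# Idempotent hosts-file update, exercised on a scratch copy.
HOSTS=$(mktemp)
NAME="bridge-863900"
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
    if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
        # A 127.0.1.1 entry exists: rewrite it in place.
        sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
    else
        # No 127.0.1.1 entry yet: append one.
        echo "127.0.1.1 $NAME" >> "$HOSTS"
    fi
fi
cat "$HOSTS"
```

Running the script a second time is a no-op, which is the point of the outer `grep` guard.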
	I0229 20:19:29.039595    6340 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0229 20:19:29.039755    6340 buildroot.go:174] setting up certificates
	I0229 20:19:29.039755    6340 provision.go:83] configureAuth start
	I0229 20:19:29.039755    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-863900 ).state
	I0229 20:19:31.554303    6340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:19:31.554303    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:19:31.554303    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM bridge-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:19:34.310214    6340 main.go:141] libmachine: [stdout =====>] : 172.26.60.17
	
	I0229 20:19:34.310245    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:19:34.310391    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-863900 ).state
	I0229 20:19:36.712889    6340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:19:36.712960    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:19:36.712960    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM bridge-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:19:39.423073    6340 main.go:141] libmachine: [stdout =====>] : 172.26.60.17
	
	I0229 20:19:39.423073    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:19:39.423192    6340 provision.go:138] copyHostCerts
	I0229 20:19:39.423740    6340 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0229 20:19:39.423740    6340 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0229 20:19:39.424280    6340 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0229 20:19:39.425638    6340 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0229 20:19:39.425638    6340 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0229 20:19:39.426057    6340 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0229 20:19:39.427713    6340 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0229 20:19:39.427765    6340 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0229 20:19:39.428370    6340 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0229 20:19:39.429725    6340 provision.go:112] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.bridge-863900 san=[172.26.60.17 172.26.60.17 localhost 127.0.0.1 minikube bridge-863900]
	I0229 20:19:39.738519    6340 provision.go:172] copyRemoteCerts
	I0229 20:19:39.751097    6340 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0229 20:19:39.751168    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-863900 ).state
	I0229 20:19:42.052383    6340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:19:42.052383    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:19:42.052383    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM bridge-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:19:44.600940    6340 main.go:141] libmachine: [stdout =====>] : 172.26.60.17
	
	I0229 20:19:44.600940    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:19:44.601474    6340 sshutil.go:53] new ssh client: &{IP:172.26.60.17 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\bridge-863900\id_rsa Username:docker}
	I0229 20:19:44.719488    6340 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9680435s)
	I0229 20:19:44.719488    6340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0229 20:19:44.773889    6340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0229 20:19:44.828333    6340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0229 20:19:44.875357    6340 provision.go:86] duration metric: configureAuth took 15.8347225s
	I0229 20:19:44.875357    6340 buildroot.go:189] setting minikube options for container-runtime
	I0229 20:19:44.876130    6340 config.go:182] Loaded profile config "bridge-863900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 20:19:44.876130    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-863900 ).state
	I0229 20:19:47.007238    6340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:19:47.007238    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:19:47.007238    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM bridge-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:19:49.573913    6340 main.go:141] libmachine: [stdout =====>] : 172.26.60.17
	
	I0229 20:19:49.574022    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:19:49.580867    6340 main.go:141] libmachine: Using SSH client type: native
	I0229 20:19:49.581525    6340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.60.17 22 <nil> <nil>}
	I0229 20:19:49.581525    6340 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0229 20:19:49.722413    6340 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0229 20:19:49.722413    6340 buildroot.go:70] root file system type: tmpfs
	I0229 20:19:49.722413    6340 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0229 20:19:49.722413    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-863900 ).state
	I0229 20:19:51.922595    6340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:19:51.922595    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:19:51.922595    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM bridge-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:19:54.619239    6340 main.go:141] libmachine: [stdout =====>] : 172.26.60.17
	
	I0229 20:19:54.619239    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:19:54.625617    6340 main.go:141] libmachine: Using SSH client type: native
	I0229 20:19:54.626203    6340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.60.17 22 <nil> <nil>}
	I0229 20:19:54.626440    6340 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0229 20:19:54.800431    6340 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0229 20:19:54.800431    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-863900 ).state
	I0229 20:19:56.980843    6340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:19:56.980843    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:19:56.980843    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM bridge-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:19:59.875666    6340 main.go:141] libmachine: [stdout =====>] : 172.26.60.17
	
	I0229 20:19:59.875666    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:19:59.881174    6340 main.go:141] libmachine: Using SSH client type: native
	I0229 20:19:59.881723    6340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.60.17 22 <nil> <nil>}
	I0229 20:19:59.881825    6340 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0229 20:20:01.190008    6340 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0229 20:20:01.190067    6340 machine.go:91] provisioned docker machine in 42.5921195s
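The `diff … || { mv …; systemctl …; }` command above is an update-if-changed pattern: `diff` exits non-zero both when the files differ and when the target is missing (the "can't stat" case in this run), and only then is the new unit installed and the daemon restarted. A sketch with plain files, where the reload step is a stand-in for the `systemctl` calls:

```shell
#!/bin/sh
# Update-if-changed: install the new config only when it differs from
# (or there is no) current config, then mark that a reload is needed.
dir=$(mktemp -d)
new="$dir/docker.service.new"
cur="$dir/docker.service"
printf '[Service]\nExecStart=/usr/bin/dockerd\n' > "$new"

reloaded=0
# diff exits non-zero when the files differ OR when $cur does not exist,
# exactly the situation logged above ("can't stat ... docker.service").
if ! diff -u "$cur" "$new" 2>/dev/null; then
    mv "$new" "$cur"
    reloaded=1   # real code: daemon-reload && enable docker && restart docker
fi

echo "reloaded=$reloaded"
```

On an unchanged config, `diff` succeeds and the restart is skipped entirely, which keeps repeated provisioning runs cheap.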
	I0229 20:20:01.190067    6340 client.go:171] LocalClient.Create took 2m1.0780891s
	I0229 20:20:01.190067    6340 start.go:167] duration metric: libmachine.API.Create for "bridge-863900" took 2m1.0782238s
	I0229 20:20:01.190186    6340 start.go:300] post-start starting for "bridge-863900" (driver="hyperv")
	I0229 20:20:01.190253    6340 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0229 20:20:01.207218    6340 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0229 20:20:01.207218    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-863900 ).state
	I0229 20:20:03.546230    6340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:20:03.547053    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:20:03.548685    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM bridge-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:20:06.439951    6340 main.go:141] libmachine: [stdout =====>] : 172.26.60.17
	
	I0229 20:20:06.449437    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:20:06.450114    6340 sshutil.go:53] new ssh client: &{IP:172.26.60.17 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\bridge-863900\id_rsa Username:docker}
	I0229 20:20:06.559940    6340 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.3524249s)
	I0229 20:20:06.575278    6340 ssh_runner.go:195] Run: cat /etc/os-release
	I0229 20:20:06.582875    6340 info.go:137] Remote host: Buildroot 2023.02.9
	I0229 20:20:06.582875    6340 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0229 20:20:06.582875    6340 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0229 20:20:06.584584    6340 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem -> 43562.pem in /etc/ssl/certs
	I0229 20:20:06.596721    6340 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0229 20:20:06.617665    6340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\43562.pem --> /etc/ssl/certs/43562.pem (1708 bytes)
	I0229 20:20:06.679660    6340 start.go:303] post-start completed in 5.4891023s
	I0229 20:20:06.681635    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-863900 ).state
	I0229 20:20:09.106281    6340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:20:09.106356    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:20:09.106356    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM bridge-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:20:11.956642    6340 main.go:141] libmachine: [stdout =====>] : 172.26.60.17
	
	I0229 20:20:11.956709    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:20:11.956709    6340 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\bridge-863900\config.json ...
	I0229 20:20:11.959944    6340 start.go:128] duration metric: createHost completed in 2m11.8490444s
	I0229 20:20:11.959944    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-863900 ).state
	I0229 20:20:14.318476    6340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:20:14.318536    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:20:14.318572    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM bridge-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:20:17.092946    6340 main.go:141] libmachine: [stdout =====>] : 172.26.60.17
	
	I0229 20:20:17.092946    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:20:17.098319    6340 main.go:141] libmachine: Using SSH client type: native
	I0229 20:20:17.098989    6340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.60.17 22 <nil> <nil>}
	I0229 20:20:17.099055    6340 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0229 20:20:17.241188    6340 main.go:141] libmachine: SSH cmd err, output: <nil>: 1709238017.391751617
	
	I0229 20:20:17.241188    6340 fix.go:206] guest clock: 1709238017.391751617
	I0229 20:20:17.241188    6340 fix.go:219] Guest: 2024-02-29 20:20:17.391751617 +0000 UTC Remote: 2024-02-29 20:20:11.9599441 +0000 UTC m=+150.230130801 (delta=5.431807517s)
	I0229 20:20:17.241188    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-863900 ).state
	I0229 20:20:19.612701    6340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:20:19.612701    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:20:19.613232    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM bridge-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:20:22.319305    6340 main.go:141] libmachine: [stdout =====>] : 172.26.60.17
	
	I0229 20:20:22.319390    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:20:22.326497    6340 main.go:141] libmachine: Using SSH client type: native
	I0229 20:20:22.327138    6340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbe9d80] 0xbec960 <nil>  [] 0s} 172.26.60.17 22 <nil> <nil>}
	I0229 20:20:22.327138    6340 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1709238017
	I0229 20:20:22.472055    6340 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Feb 29 20:20:17 UTC 2024
	
	I0229 20:20:22.472178    6340 fix.go:226] clock set: Thu Feb 29 20:20:17 UTC 2024
	 (err=<nil>)
	I0229 20:20:22.472178    6340 start.go:83] releasing machines lock for "bridge-863900", held for 2m22.3613034s
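The clock-fix step above reads the guest's epoch time over SSH (`date +%s.%N`), compares it with the host-side reference, and pushes the host time into the guest (`sudo date -s @…`) when they drift; this run logged a delta of about 5.4s. A minimal sketch of that comparison; the timestamps and the 2-second drift threshold are illustrative assumptions, not minikube's actual constants:

```shell
#!/bin/sh
# Guest-clock drift check (seconds only; timestamps are stand-ins).
guest=1709238017    # what `date +%s` reported inside the guest
remote=1709238011   # host-side reference time captured at the same moment
delta=$((guest - remote))

# Assumed policy: any drift beyond +/-2s triggers the equivalent of
# `sudo date -s @<host_epoch>` on the guest.
if [ "$delta" -gt 2 ] || [ "$delta" -lt -2 ]; then
    action="resync"
else
    action="keep"
fi
echo "delta=${delta}s action=$action"
```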
	I0229 20:20:22.472178    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-863900 ).state
	I0229 20:20:24.820631    6340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:20:24.820631    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:20:24.820711    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM bridge-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:20:27.676468    6340 main.go:141] libmachine: [stdout =====>] : 172.26.60.17
	
	I0229 20:20:27.676615    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:20:27.681471    6340 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0229 20:20:27.681654    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-863900 ).state
	I0229 20:20:27.689377    6340 ssh_runner.go:195] Run: cat /version.json
	I0229 20:20:27.689377    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-863900 ).state
	I0229 20:20:30.237680    6340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:20:30.242737    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:20:30.242829    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM bridge-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:20:30.250506    6340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 20:20:30.250506    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:20:30.250614    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM bridge-863900 ).networkadapters[0]).ipaddresses[0]
	I0229 20:20:33.204108    6340 main.go:141] libmachine: [stdout =====>] : 172.26.60.17
	
	I0229 20:20:33.204108    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:20:33.204183    6340 sshutil.go:53] new ssh client: &{IP:172.26.60.17 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\bridge-863900\id_rsa Username:docker}
	I0229 20:20:33.235911    6340 main.go:141] libmachine: [stdout =====>] : 172.26.60.17
	
	I0229 20:20:33.235986    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:20:33.236495    6340 sshutil.go:53] new ssh client: &{IP:172.26.60.17 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\bridge-863900\id_rsa Username:docker}
	I0229 20:20:33.420064    6340 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.7381863s)
	I0229 20:20:33.420143    6340 ssh_runner.go:235] Completed: cat /version.json: (5.7303684s)
	I0229 20:20:33.435395    6340 ssh_runner.go:195] Run: systemctl --version
	I0229 20:20:33.458496    6340 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0229 20:20:33.468843    6340 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0229 20:20:33.479835    6340 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0229 20:20:33.513475    6340 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0229 20:20:33.513475    6340 start.go:475] detecting cgroup driver to use...
	I0229 20:20:33.513475    6340 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 20:20:33.576423    6340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0229 20:20:33.612818    6340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0229 20:20:33.634709    6340 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0229 20:20:33.645012    6340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0229 20:20:33.688978    6340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 20:20:33.720534    6340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0229 20:20:33.757703    6340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0229 20:20:33.795545    6340 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0229 20:20:33.834340    6340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0229 20:20:33.874012    6340 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0229 20:20:33.917271    6340 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0229 20:20:33.958536    6340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 20:20:34.214167    6340 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0229 20:20:34.253321    6340 start.go:475] detecting cgroup driver to use...
	I0229 20:20:34.268605    6340 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0229 20:20:34.322770    6340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 20:20:34.360675    6340 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0229 20:20:34.405963    6340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0229 20:20:34.447671    6340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 20:20:34.492520    6340 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0229 20:20:34.582301    6340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0229 20:20:34.608118    6340 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0229 20:20:34.678114    6340 ssh_runner.go:195] Run: which cri-dockerd
	I0229 20:20:34.703137    6340 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0229 20:20:34.723972    6340 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0229 20:20:34.785223    6340 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0229 20:20:35.047734    6340 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0229 20:20:35.306067    6340 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0229 20:20:35.306341    6340 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0229 20:20:35.400734    6340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 20:20:35.621052    6340 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 20:20:37.232858    6340 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.611717s)
	I0229 20:20:37.246174    6340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0229 20:20:37.289802    6340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 20:20:37.326509    6340 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0229 20:20:37.532818    6340 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0229 20:20:37.744793    6340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 20:20:37.959547    6340 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0229 20:20:38.001842    6340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0229 20:20:38.048182    6340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 20:20:38.293605    6340 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0229 20:20:38.408163    6340 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0229 20:20:38.422874    6340 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0229 20:20:38.434953    6340 start.go:543] Will wait 60s for crictl version
	I0229 20:20:38.446159    6340 ssh_runner.go:195] Run: which crictl
	I0229 20:20:38.462016    6340 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0229 20:20:38.543058    6340 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0229 20:20:38.555583    6340 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 20:20:38.602860    6340 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0229 20:20:38.645650    6340 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0229 20:20:38.645821    6340 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0229 20:20:38.649940    6340 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0229 20:20:38.649940    6340 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0229 20:20:38.649940    6340 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0229 20:20:38.649940    6340 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:76:3f:19 Flags:up|broadcast|multicast|running}
	I0229 20:20:38.653070    6340 ip.go:210] interface addr: fe80::841a:4367:8c9:abc/64
	I0229 20:20:38.653209    6340 ip.go:210] interface addr: 172.26.48.1/20
	I0229 20:20:38.662956    6340 ssh_runner.go:195] Run: grep 172.26.48.1	host.minikube.internal$ /etc/hosts
	I0229 20:20:38.673760    6340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.26.48.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0229 20:20:38.700744    6340 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 20:20:38.709565    6340 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 20:20:38.737922    6340 docker.go:685] Got preloaded images: 
	I0229 20:20:38.737922    6340 docker.go:691] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0229 20:20:38.750344    6340 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 20:20:38.783174    6340 ssh_runner.go:195] Run: which lz4
	I0229 20:20:38.812907    6340 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0229 20:20:38.820421    6340 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0229 20:20:38.820421    6340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I0229 20:20:42.173355    6340 docker.go:649] Took 3.377448 seconds to copy over tarball
	I0229 20:20:42.189917    6340 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0229 20:20:50.744125    6340 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.5537325s)
	I0229 20:20:50.744125    6340 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0229 20:20:50.819132    6340 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0229 20:20:50.843874    6340 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0229 20:20:50.904771    6340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0229 20:20:51.163224    6340 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0229 20:23:36.358400    6340 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2m45.1859813s)
	I0229 20:23:36.367466    6340 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	W0229 20:23:36.367466    6340 ssh_runner.go:129] session error, resetting client: read tcp 172.26.48.1:54936->172.26.60.17:22: wsarecv: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
	I0229 20:23:36.367466    6340 retry.go:31] will retry after 296.070004ms: read tcp 172.26.48.1:54936->172.26.60.17:22: wsarecv: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
	I0229 20:23:36.664581    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-863900 ).state
	I0229 20:23:38.750259    6340 main.go:141] libmachine: [stdout =====>] : PausedCritical
	
	I0229 20:23:38.750356    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:23:38.750356    6340 kubeadm.go:936] preload failed, will try to load cached images: sudo systemctl restart docker: wait: remote command exited without exit status or exit signal
	stdout:
	
	stderr:
	
	failed to get journalctl logs: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: Host is not running
	I0229 20:23:38.757259    6340 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0229 20:23:38.757259    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-863900 ).state
	I0229 20:23:40.953637    6340 main.go:141] libmachine: [stdout =====>] : PausedCritical
	
	I0229 20:23:40.953727    6340 main.go:141] libmachine: [stderr =====>] : 
	W0229 20:23:40.953768    6340 docker.go:676] NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: Host is not running
	I0229 20:23:40.953768    6340 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.4 registry.k8s.io/kube-controller-manager:v1.28.4 registry.k8s.io/kube-scheduler:v1.28.4 registry.k8s.io/kube-proxy:v1.28.4 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0229 20:23:40.971327    6340 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 20:23:40.989144    6340 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0229 20:23:40.992446    6340 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 20:23:40.994129    6340 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.4
	I0229 20:23:41.000251    6340 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.4
	I0229 20:23:41.000304    6340 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.4
	I0229 20:23:41.000304    6340 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0229 20:23:41.001427    6340 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.4
	I0229 20:23:41.003000    6340 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0229 20:23:41.019180    6340 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.4
	I0229 20:23:41.019180    6340 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0229 20:23:41.020015    6340 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0229 20:23:41.021274    6340 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.4
	I0229 20:23:41.021475    6340 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.4
	I0229 20:23:41.031645    6340 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.4
	I0229 20:23:41.031753    6340 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	W0229 20:23:41.112834    6340 image.go:187] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 20:23:41.210028    6340 image.go:187] authn lookup for registry.k8s.io/kube-apiserver:v1.28.4 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 20:23:41.290380    6340 image.go:187] authn lookup for registry.k8s.io/coredns/coredns:v1.10.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 20:23:41.368733    6340 image.go:187] authn lookup for registry.k8s.io/etcd:3.5.9-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0229 20:23:41.450596    6340 image.go:187] authn lookup for registry.k8s.io/kube-proxy:v1.28.4 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0229 20:23:41.475777    6340 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 20:23:41.476313    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-863900 ).state
	W0229 20:23:41.546651    6340 image.go:187] authn lookup for registry.k8s.io/kube-controller-manager:v1.28.4 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0229 20:23:41.664064    6340 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I0229 20:23:41.664155    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-863900 ).state
	W0229 20:23:41.690059    6340 image.go:187] authn lookup for registry.k8s.io/kube-scheduler:v1.28.4 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0229 20:23:41.692932    6340 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I0229 20:23:41.693038    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-863900 ).state
	I0229 20:23:41.744913    6340 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.4
	I0229 20:23:41.744913    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-863900 ).state
	I0229 20:23:41.762764    6340 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.4
	I0229 20:23:41.762764    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-863900 ).state
	W0229 20:23:41.805734    6340 image.go:187] authn lookup for registry.k8s.io/pause:3.9 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0229 20:23:41.840969    6340 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.4
	I0229 20:23:41.840969    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-863900 ).state
	I0229 20:23:42.002666    6340 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.4
	I0229 20:23:42.002666    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-863900 ).state
	I0229 20:23:42.011963    6340 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0229 20:23:42.011963    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-863900 ).state
	I0229 20:23:45.282438    6340 main.go:141] libmachine: [stdout =====>] : PausedCritical
	
	I0229 20:23:45.282438    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:23:45.282438    6340 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0229 20:23:45.285471    6340 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I0229 20:23:45.286010    6340 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 20:23:45.297356    6340 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0229 20:23:45.297356    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-863900 ).state
	I0229 20:23:45.462029    6340 main.go:141] libmachine: [stdout =====>] : PausedCritical
	
	I0229 20:23:45.462029    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:23:45.462029    6340 main.go:141] libmachine: [stdout =====>] : PausedCritical
	
	I0229 20:23:45.462029    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:23:45.462029    6340 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I0229 20:23:45.462029    6340 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.5.9-0 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.5.9-0
	I0229 20:23:45.462029    6340 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.9-0
	I0229 20:23:45.465211    6340 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.4" needs transfer: "registry.k8s.io/kube-proxy:v1.28.4" does not exist at hash "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e" in container runtime
	I0229 20:23:45.465211    6340 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.28.4 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.28.4
	I0229 20:23:45.465211    6340 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.28.4
	I0229 20:23:45.473911    6340 main.go:141] libmachine: [stdout =====>] : PausedCritical
	
	I0229 20:23:45.473911    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:23:45.473911    6340 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I0229 20:23:45.473911    6340 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.10.1 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.10.1
	I0229 20:23:45.474801    6340 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I0229 20:23:45.479970    6340 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.28.4
	I0229 20:23:45.479970    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-863900 ).state
	I0229 20:23:45.481002    6340 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.9-0
	I0229 20:23:45.482606    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-863900 ).state
	I0229 20:23:45.499751    6340 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.10.1
	I0229 20:23:45.499751    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-863900 ).state
	I0229 20:23:45.584593    6340 main.go:141] libmachine: [stdout =====>] : PausedCritical
	
	I0229 20:23:45.584593    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:23:45.584593    6340 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.4" does not exist at hash "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257" in container runtime
	I0229 20:23:45.584593    6340 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.28.4 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.28.4
	I0229 20:23:45.584593    6340 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.28.4
	I0229 20:23:45.597909    6340 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.28.4
	I0229 20:23:45.597909    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-863900 ).state
	I0229 20:23:45.854509    6340 main.go:141] libmachine: [stdout =====>] : PausedCritical
	
	I0229 20:23:45.854509    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:23:45.870463    6340 cache_images.go:116] "registry.k8s.io/pause:3.9" needs transfer: "registry.k8s.io/pause:3.9" does not exist at hash "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c" in container runtime
	I0229 20:23:45.870463    6340 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.9 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.9
	I0229 20:23:45.870463    6340 docker.go:337] Removing image: registry.k8s.io/pause:3.9
	I0229 20:23:45.880168    6340 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.9
	I0229 20:23:45.880168    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-863900 ).state
	I0229 20:23:45.891822    6340 main.go:141] libmachine: [stdout =====>] : PausedCritical
	
	I0229 20:23:45.891822    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:23:45.891822    6340 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.4" does not exist at hash "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1" in container runtime
	I0229 20:23:45.891822    6340 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.28.4 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.28.4
	I0229 20:23:45.891822    6340 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.28.4
	I0229 20:23:45.901157    6340 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.28.4
	I0229 20:23:45.901157    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-863900 ).state
	I0229 20:23:45.919907    6340 main.go:141] libmachine: [stdout =====>] : PausedCritical
	
	I0229 20:23:45.919907    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:23:45.919907    6340 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.4" does not exist at hash "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591" in container runtime
	I0229 20:23:45.919907    6340 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.28.4 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.28.4
	I0229 20:23:45.919907    6340 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.28.4
	I0229 20:23:45.936866    6340 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.28.4
	I0229 20:23:45.936866    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-863900 ).state
	I0229 20:23:48.989499    6340 main.go:141] libmachine: [stdout =====>] : PausedCritical
	
	I0229 20:23:48.989499    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:23:49.112027    6340 main.go:141] libmachine: [stdout =====>] : PausedCritical
	
	I0229 20:23:49.112027    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:23:49.235593    6340 main.go:141] libmachine: [stdout =====>] : PausedCritical
	
	I0229 20:23:49.235593    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:23:49.263325    6340 main.go:141] libmachine: [stdout =====>] : PausedCritical
	
	I0229 20:23:49.263410    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:23:49.296494    6340 main.go:141] libmachine: [stdout =====>] : PausedCritical
	
	I0229 20:23:49.310800    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:23:49.510545    6340 main.go:141] libmachine: [stdout =====>] : PausedCritical
	
	I0229 20:23:49.510545    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:23:49.553471    6340 main.go:141] libmachine: [stdout =====>] : PausedCritical
	
	I0229 20:23:49.565434    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:23:49.688400    6340 main.go:141] libmachine: [stdout =====>] : PausedCritical
	
	I0229 20:23:49.688400    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:23:49.688400    6340 cache_images.go:92] LoadImages completed in 8.7341453s
	W0229 20:23:49.688956    6340 out.go:239] X Unable to load cached images: loading cached images: removing image: remove image docker: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: Host is not running
	X Unable to load cached images: loading cached images: removing image: remove image docker: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: Host is not running
	I0229 20:23:49.699055    6340 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0229 20:23:49.699055    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-863900 ).state
	I0229 20:23:51.909229    6340 main.go:141] libmachine: [stdout =====>] : PausedCritical
	
	I0229 20:23:51.909920    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:23:51.919699    6340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0229 20:23:51.919699    6340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM bridge-863900 ).state
	I0229 20:23:54.234356    6340 main.go:141] libmachine: [stdout =====>] : PausedCritical
	
	I0229 20:23:54.234584    6340 main.go:141] libmachine: [stderr =====>] : 
	I0229 20:23:54.235487    6340 out.go:177] 
	W0229 20:23:54.236182    6340 out.go:239] X Exiting due to K8S_INSTALL_FAILED_CONTAINER_RUNTIME_NOT_RUNNING: Failed to update cluster: updating control plane: generating kubeadm cfg: container runtime is not running
	X Exiting due to K8S_INSTALL_FAILED_CONTAINER_RUNTIME_NOT_RUNNING: Failed to update cluster: updating control plane: generating kubeadm cfg: container runtime is not running
	W0229 20:23:54.236182    6340 out.go:239] * 
	* 
	W0229 20:23:54.238347    6340 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0229 20:23:54.239002    6340 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 93
--- FAIL: TestNetworkPlugins/group/bridge/Start (372.70s)

TestNetworkPlugins/group/kubenet/HairPin (10800.524s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-863900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.33s)
panic: test timed out after 3h0m0s
running tests:
	TestStartStop (1h1m11s)
	TestStartStop/group/newest-cni (5m48s)
	TestStartStop/group/newest-cni/serial (5m48s)
	TestStartStop/group/newest-cni/serial/SecondStart (14s)
	TestStartStop/group/no-preload (13m4s)
	TestStartStop/group/no-preload/serial (13m4s)
	TestStartStop/group/no-preload/serial/SecondStart (7m14s)
	TestStartStop/group/old-k8s-version (14m41s)
	TestStartStop/group/old-k8s-version/serial (14m41s)
	TestStartStop/group/old-k8s-version/serial/SecondStart (3m51s)

goroutine 3326 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

goroutine 1 [chan receive]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0004009c0, 0xc000817bb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc0007f2330, {0x47c8f40, 0x2a, 0x2a}, {0x2539683?, 0x4581af?, 0x47eba80?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc000b7f860)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc000b7f860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

goroutine 11 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000070300)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 2598 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2597
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:280 +0xbb

goroutine 3290 [select]:
os/exec.(*Cmd).watchCtx(0xc0025c4580, 0xc0027e44e0)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3287
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 154 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc000af5890, 0x3c)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x1ff9920?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00064ed80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000af58c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00071dcb0, {0x34814a0, 0xc002217200}, 0x1, 0xc00049e000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00071dcb0, 0x3b9aca00, 0x0, 0x1, 0xc00049e000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 194
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:140 +0x1ef

goroutine 86 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 36
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

goroutine 3046 [chan receive, 8 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00047edc0, 0xc00049e000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3086
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cache.go:122 +0x585

goroutine 2059 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc002be24d0, 0x17)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x1ff9920?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002060ba0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002be2500)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0029f2010, {0x34814a0, 0xc000814180}, 0x1, 0xc00049e000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0029f2010, 0x3b9aca00, 0x0, 0x1, 0xc00049e000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2110
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:140 +0x1ef

goroutine 844 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc002ceacd0, 0x36)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x1ff9920?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0021f78c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002cead00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0029f3820, {0x34814a0, 0xc002688000}, 0x1, 0xc00049e000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0029f3820, 0x3b9aca00, 0x0, 0x1, 0xc00049e000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 880
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:140 +0x1ef

goroutine 156 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 155
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:280 +0xbb

goroutine 1201 [chan send, 146 minutes]:
os/exec.(*Cmd).watchCtx(0xc00086c2c0, 0xc002c3cde0)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 851
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 2596 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc000ba3410, 0x14)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x1ff9920?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000bf9080)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000ba3440)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00071c540, {0x34814a0, 0xc000b88000}, 0x1, 0xc00049e000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00071c540, 0x3b9aca00, 0x0, 0x1, 0xc00049e000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2583
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:140 +0x1ef

goroutine 155 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x34a3dc0, 0xc00049e000}, 0xc002149f50, 0xc002149f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x34a3dc0, 0xc00049e000}, 0xf8?, 0xc002149f50, 0xc002149f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x34a3dc0?, 0xc00049e000?}, 0xc000446ea0?, 0x4e7ee0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x4e8e45?, 0xc000446ea0?, 0xc0004c5000?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 194
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:142 +0x29a

goroutine 2697 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc002714a90, 0x13)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x1ff9920?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0026237a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002714ac0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0020d67c0, {0x34814a0, 0xc0023978f0}, 0x1, 0xc00049e000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0020d67c0, 0x3b9aca00, 0x0, 0x1, 0xc00049e000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2707
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:140 +0x1ef

goroutine 3200 [chan receive]:
testing.(*T).Run(0xc000447860, {0x24ebbfb?, 0x60400000004?}, 0xc000c8a980)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc000447860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc000447860, 0xc002718600)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1925
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1927 [chan receive, 14 minutes]:
testing.(*T).Run(0xc0006a2ea0, {0x24e04b1?, 0x0?}, 0xc002719500)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0006a2ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0006a2ea0, 0xc000ba22c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1923
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 3233 [syscall, locked to thread]:
syscall.SyscallN(0x0?, {0xc000b7bb20?, 0x3b7f45?, 0x4878ec0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc0002eea41?, 0xc000b7bb80?, 0x3afe76?, 0x4878ec0?, 0xc000b7bc08?, 0x3a28db?, 0x0?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x334, {0xc00239cad8?, 0x528, 0x4542bf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc002390008?, {0xc00239cad8?, 0x3dc25e?, 0x800?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002390008, {0xc00239cad8, 0x528, 0x528})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc002a10310, {0xc00239cad8?, 0xc000b7bd98?, 0x20c?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00284eab0, {0x3480060, 0xc000867c28})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x34801a0, 0xc00284eab0}, {0x3480060, 0xc000867c28}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x34801a0, 0xc00284eab0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x3a0cf6?, {0x34801a0?, 0xc00284eab0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x34801a0, 0xc00284eab0}, {0x3480120, 0xc002a10310}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc00205c000?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3232
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2490 [chan receive, 24 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002cea6c0, 0xc00049e000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2488
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cache.go:122 +0x585

goroutine 846 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 845
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:280 +0xbb

goroutine 2839 [chan receive, 5 minutes]:
testing.(*T).Run(0xc000447380, {0x24ebbfb?, 0x60400000004?}, 0xc002718700)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc000447380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc000447380, 0xc002718580)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1924
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 177 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00064eea0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 165
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:113 +0x205

goroutine 194 [chan receive, 173 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000af58c0, 0xc00049e000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 165
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cache.go:122 +0x585

goroutine 3105 [chan receive, 5 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0025e63c0, 0xc00049e000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3220
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cache.go:122 +0x585

goroutine 3169 [syscall, 7 minutes, locked to thread]:
syscall.SyscallN(0x7fffe1ae4de0?, {0xc002a79ab0?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x4a8, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc000868c30)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc002730000)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc002730000)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc000446ea0, 0xc002730000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x34a3c00, 0xc0004d03f0}, 0xc000446ea0, {0xc00295a6f0, 0x11}, {0xc0110a55d0?, 0xc002a79f60?}, {0x4e75b3?, 0x438eaf?}, {0xc002213a00, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc000446ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc000446ea0, 0xc002718000)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2920
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 3289 [syscall, locked to thread]:
syscall.SyscallN(0x0?, {0xc002063b20?, 0x3b7f45?, 0x4878ec0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc000b56159?, 0xc002063b80?, 0x3afe76?, 0x4878ec0?, 0xc002063c08?, 0x3a2a45?, 0x1d16d480108?, 0x77?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x528, {0xc002342cbb?, 0x1345, 0x4542bf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc002328288?, {0xc002342cbb?, 0x3dc211?, 0x4000?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002328288, {0xc002342cbb, 0x1345, 0x1345})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc002a10278, {0xc002342cbb?, 0xc002063d98?, 0x1fcb?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00240c660, {0x3480060, 0xc0027ac1a0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x34801a0, 0xc00240c660}, {0x3480060, 0xc0027ac1a0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x34801a0, 0xc00240c660})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x3a0cf6?, {0x34801a0?, 0xc00240c660?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x34801a0, 0xc00240c660}, {0x3480120, 0xc002a10278}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0027e40c0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3287
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2319 [chan receive, 28 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002714580, 0xc00049e000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2339
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cache.go:122 +0x585

goroutine 2514 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2465
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:280 +0xbb

goroutine 3287 [syscall, locked to thread]:
syscall.SyscallN(0x7fffe1ae4de0?, {0xc0021a5ab0?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x4ac, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc002965a10)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0025c4580)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0025c4580)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0024ef040, 0xc0025c4580)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x34a3c00, 0xc0002e81c0}, 0xc0024ef040, {0xc0023fe3d8, 0x11}, {0xc00cb45ea4?, 0xc0021a5f60?}, {0x4e75b3?, 0x438eaf?}, {0xc00213e500, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0024ef040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0024ef040, 0xc000c8a980)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3200
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2061 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2060
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:280 +0xbb

goroutine 2873 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x34a3dc0, 0xc00049e000}, 0xc002423f50, 0xc002423f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x34a3dc0, 0xc00049e000}, 0x90?, 0xc002423f50, 0xc002423f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x34a3dc0?, 0xc00049e000?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc002423fd0?, 0x52e684?, 0xc002cea3c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2951
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:142 +0x29a

goroutine 713 [IO wait, 163 minutes]:
internal/poll.runtime_pollWait(0x1d17297c060, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0x3afe76?, 0x4878ec0?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc000205ba0, 0xc00263fbb0)
	/usr/local/go/src/internal/poll/fd_windows.go:175 +0xe6
internal/poll.(*FD).acceptOne(0xc000205b88, 0x2fc, {0xc0002f0780?, 0x0?, 0x0?}, 0xc00051c808?)
	/usr/local/go/src/internal/poll/fd_windows.go:944 +0x67
internal/poll.(*FD).Accept(0xc000205b88, 0xc00263fd90)
	/usr/local/go/src/internal/poll/fd_windows.go:978 +0x1bc
net.(*netFD).accept(0xc000205b88)
	/usr/local/go/src/net/fd_windows.go:178 +0x54
net.(*TCPListener).accept(0xc0029c8180)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0029c8180)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc000b740f0, {0x3497850, 0xc0029c8180})
	/usr/local/go/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc000b740f0)
	/usr/local/go/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc0021849c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 710
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

goroutine 2920 [chan receive, 7 minutes]:
testing.(*T).Run(0xc0004476c0, {0x24ebbfb?, 0x60400000004?}, 0xc002718000)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0004476c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0004476c0, 0xc002719500)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1927
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 879 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0021f79e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 816
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2698 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x34a3dc0, 0xc00049e000}, 0xc0024c7f50, 0xc0024c7f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x34a3dc0, 0xc00049e000}, 0x90?, 0xc0024c7f50, 0xc0024c7f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x34a3dc0?, 0xc00049e000?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0024c7fd0?, 0x52e684?, 0xc002d5b630?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2707
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:142 +0x29a

goroutine 3104 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000c72c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3220
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2582 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000bf9260)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2581
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:113 +0x205

goroutine 3224 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc0025e6310, 0x0)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x1ff9920?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000c72ae0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0025e63c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00274e950, {0x34814a0, 0xc0024e0150}, 0x1, 0xc00049e000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00274e950, 0x3b9aca00, 0x0, 0x1, 0xc00049e000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3105
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:140 +0x1ef

goroutine 2707 [chan receive, 20 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002714ac0, 0xc00049e000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2673
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cache.go:122 +0x585

goroutine 3226 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3225
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:280 +0xbb

goroutine 2109 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002060d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2105
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2465 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x34a3dc0, 0xc00049e000}, 0xc000b7df50, 0xc000b7df98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x34a3dc0, 0xc00049e000}, 0x90?, 0xc000b7df50, 0xc000b7df98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x34a3dc0?, 0xc00049e000?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000b7dfd0?, 0x52e684?, 0xc002397d10?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2490
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:142 +0x29a

goroutine 1925 [chan receive, 7 minutes]:
testing.(*T).Run(0xc0006a29c0, {0x24e04b1?, 0x0?}, 0xc002718600)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0006a29c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0006a29c0, 0xc000ba2240)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1923
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2597 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x34a3dc0, 0xc00049e000}, 0xc0027ddf50, 0xc0027ddf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x34a3dc0, 0xc00049e000}, 0x90?, 0xc0027ddf50, 0xc0027ddf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x34a3dc0?, 0xc00049e000?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0027ddfd0?, 0x52e684?, 0xc000b56000?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2583
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:142 +0x29a

goroutine 2110 [chan receive, 34 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002be2500, 0xc00049e000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2105
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cache.go:122 +0x585

goroutine 1923 [chan receive, 61 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0006a24e0, 0x2f3aea8)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1808
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1924 [chan receive, 16 minutes]:
testing.(*T).Run(0xc0006a2680, {0x24e04b1?, 0x0?}, 0xc002718580)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0006a2680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0006a2680, 0xc000ba2140)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1923
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 845 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x34a3dc0, 0xc00049e000}, 0xc00222bf50, 0xc00222bf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x34a3dc0, 0xc00049e000}, 0xc0?, 0xc00222bf50, 0xc00222bf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x34a3dc0?, 0xc00049e000?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x52e625?, 0xc00086c2c0?, 0xc00049efc0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 880
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:142 +0x29a

goroutine 880 [chan receive, 153 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002cead00, 0xc00049e000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 816
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cache.go:122 +0x585

goroutine 2060 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x34a3dc0, 0xc00049e000}, 0xc0021a9f50, 0xc0021a9f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x34a3dc0, 0xc00049e000}, 0xa0?, 0xc0021a9f50, 0xc0021a9f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x34a3dc0?, 0xc00049e000?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x52e625?, 0xc00251c420?, 0xc0000557a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2110
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:142 +0x29a

goroutine 3267 [select, 5 minutes]:
os/exec.(*Cmd).watchCtx(0xc000b35600, 0xc002cf8900)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3232
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 2343 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc002714550, 0x16)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x1ff9920?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002622d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002714580)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc002a26390, {0x34814a0, 0xc0026d8360}, 0x1, 0xc00049e000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc002a26390, 0x3b9aca00, 0x0, 0x1, 0xc00049e000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2319
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:140 +0x1ef

goroutine 2874 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2873
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:280 +0xbb

goroutine 3266 [syscall, locked to thread]:
syscall.SyscallN(0xc00220fb10?, {0xc00220fb20?, 0x3b7f45?, 0x47f8940?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x100000000000000?, 0xc00220fb80?, 0x3afe76?, 0x4878ec0?, 0xc00220fc08?, 0x3a2a45?, 0x3333313466356431?, 0x20000?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x4bc, {0xc0026f3ee1?, 0xa11f, 0x4542bf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc002390508?, {0xc0026f3ee1?, 0x0?, 0x20000?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002390508, {0xc0026f3ee1, 0xa11f, 0xa11f})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc002a10328, {0xc0026f3ee1?, 0x2392?, 0x10000?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00284eae0, {0x3480060, 0xc0000a6fe0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x34801a0, 0xc00284eae0}, {0x3480060, 0xc0000a6fe0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x34801a0, 0xc00284eae0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x3a0cf6?, {0x34801a0?, 0xc00284eae0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x34801a0, 0xc00284eae0}, {0x3480120, 0xc002a10328}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0002eed80?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3232
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2872 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc00047ef10, 0x10)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x1ff9920?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0029c5020)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00047ef40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0028ce120, {0x34814a0, 0xc000815ef0}, 0x1, 0xc00049e000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0028ce120, 0x3b9aca00, 0x0, 0x1, 0xc00049e000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2951
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:140 +0x1ef

goroutine 3045 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0021f69c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3086
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:113 +0x205

goroutine 3186 [syscall, 5 minutes, locked to thread]:
syscall.SyscallN(0x1e625e5?, {0xc002609b20?, 0x3b7f45?, 0x4878ec0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc00048e808?, 0xc002609b80?, 0x3afe76?, 0x4878ec0?, 0xc002609c08?, 0x3a28db?, 0x398c66?, 0xc000067635?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x614, {0xc00218129d?, 0x563, 0x4542bf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc0020c2508?, {0xc00218129d?, 0x3dc211?, 0x800?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0020c2508, {0xc00218129d, 0x563, 0x563})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc002a10060, {0xc00218129d?, 0xc002609d98?, 0x226?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0029621e0, {0x3480060, 0xc00060e3a8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x34801a0, 0xc0029621e0}, {0x3480060, 0xc00060e3a8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x34801a0, 0xc0029621e0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x3a0cf6?, {0x34801a0?, 0xc0029621e0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x34801a0, 0xc0029621e0}, {0x3480120, 0xc002a10060}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002719d00?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3169
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 1032 [chan send, 151 minutes]:
os/exec.(*Cmd).watchCtx(0xc0028e18c0, 0xc0027e5d40)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1031
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 2318 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002622ea0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2339
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2706 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0026238c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2673
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2345 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2344
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:280 +0xbb

goroutine 2699 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2698
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:280 +0xbb

goroutine 3188 [select, 7 minutes]:
os/exec.(*Cmd).watchCtx(0xc002730000, 0xc002660660)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3169
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 1808 [chan receive, 61 minutes]:
testing.(*T).Run(0xc002185040, {0x24defa8?, 0x4e75b3?}, 0x2f3aea8)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc002185040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc002185040, 0x2f3acd0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2344 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x34a3dc0, 0xc00049e000}, 0xc0025dff50, 0xc0025dff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x34a3dc0, 0xc00049e000}, 0xee?, 0xc0025dff50, 0xc0025dff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x34a3dc0?, 0xc00049e000?}, 0xc0006a3ba0?, 0x4e7ee0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x4e8e45?, 0xc0006a3ba0?, 0xc000ba2640?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2319
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:142 +0x29a

goroutine 2464 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc002cea690, 0x14)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x1ff9920?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000c72e40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002cea6c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc002128ab0, {0x34814a0, 0xc00270c300}, 0x1, 0xc00049e000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc002128ab0, 0x3b9aca00, 0x0, 0x1, 0xc00049e000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2490
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:140 +0x1ef

goroutine 2489 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000c72f60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2488
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2583 [chan receive, 22 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000ba3440, 0xc00049e000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2581
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cache.go:122 +0x585

goroutine 3187 [syscall, locked to thread]:
syscall.SyscallN(0xc002ae2380?, {0xc00219db20?, 0x0?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x0?, 0x0?, 0xc0000aa1e0?, 0xc000c0a840?, 0xc0000aa200?, 0xc003814528?, 0xc000614900?, 0xc000128b20?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x7a0, {0xc0022fac75?, 0x338b, 0x4542bf?}, 0xc002abc520?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc0020c2c88?, {0xc0022fac75?, 0x3dc25e?, 0x20000?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0020c2c88, {0xc0022fac75, 0x338b, 0x338b})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc002a10080, {0xc0022fac75?, 0x51fa?, 0xfe84?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002962210, {0x3480060, 0xc0000a6a40})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x34801a0, 0xc002962210}, {0x3480060, 0xc0000a6a40}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00219de78?, {0x34801a0, 0xc002962210})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc00219df38?, {0x34801a0?, 0xc002962210?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x34801a0, 0xc002962210}, {0x3480120, 0xc002a10080}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002cf8300?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3169
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 2951 [chan receive, 12 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00047ef40, 0xc00049e000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2949
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cache.go:122 +0x585

goroutine 3225 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x34a3dc0, 0xc00049e000}, 0xc002549f50, 0xc002549f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x34a3dc0, 0xc00049e000}, 0xa0?, 0xc002549f50, 0xc002549f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x34a3dc0?, 0xc00049e000?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc002549fd0?, 0x52e684?, 0xc0002eed80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3105
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:142 +0x29a

goroutine 2950 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0029c5140)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2949
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/delaying_queue.go:113 +0x205

goroutine 3288 [syscall, locked to thread]:
syscall.SyscallN(0x6f002e0067006e?, {0xc0021dfb20?, 0x3b7f45?, 0x4878ec0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x6d007500730035?, 0xc0021dfb80?, 0x3afe76?, 0x4878ec0?, 0xc0021dfc08?, 0x3a28db?, 0x1d16d480598?, 0x35?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x3a4, {0xc00255aa26?, 0x5da, 0xc00255a800?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc002391b88?, {0xc00255aa26?, 0x3dc211?, 0x800?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002391b88, {0xc00255aa26, 0x5da, 0x5da})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc002a10218, {0xc00255aa26?, 0xc00264f500?, 0x226?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00240c630, {0x3480060, 0xc000867d68})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x34801a0, 0xc00240c630}, {0x3480060, 0xc000867d68}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0021dfe78?, {0x34801a0, 0xc00240c630})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc0021dff38?, {0x34801a0?, 0xc00240c630?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x34801a0, 0xc00240c630}, {0x3480120, 0xc002a10218}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0026600c0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3287
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 3232 [syscall, 5 minutes, locked to thread]:
syscall.SyscallN(0x7fffe1ae4de0?, {0xc002295ab0?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x738, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0024f66c0)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000b35600)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc000b35600)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc002184d00, 0xc000b35600)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x34a3c00, 0xc0002e8230}, 0xc002184d00, {0xc0025ea9d8, 0x16}, {0xc0187baf58?, 0xc002295f60?}, {0x4e75b3?, 0x438eaf?}, {0xc000c06300, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc002184d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc002184d00, 0xc002718700)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2839
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 3090 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc00047ed90, 0x1)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x1ff9920?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0021f68a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00047edc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0020d7aa0, {0x34814a0, 0xc002688240}, 0x1, 0xc00049e000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0020d7aa0, 0x3b9aca00, 0x0, 0x1, 0xc00049e000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3046
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:140 +0x1ef

goroutine 3091 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x34a3dc0, 0xc00049e000}, 0xc002067f50, 0xc002067f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x34a3dc0, 0xc00049e000}, 0x65?, 0xc002067f50, 0xc002067f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x34a3dc0?, 0xc00049e000?}, 0x342e36322e323731?, 0x3434383a38352e38?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x3220393220626546?, 0x2036303a35323a30?, 0x2d74656e6562756b?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3046
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.2/transport/cert_rotation.go:142 +0x29a

goroutine 3092 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3091
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:280 +0xbb


Test pass (194/247)

Order | Passed test | Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 14.89
4 TestDownloadOnly/v1.16.0/preload-exists 0.15
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.22
9 TestDownloadOnly/v1.16.0/DeleteAll 1.06
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 1.04
12 TestDownloadOnly/v1.28.4/json-events 10.91
13 TestDownloadOnly/v1.28.4/preload-exists 0
16 TestDownloadOnly/v1.28.4/kubectl 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.22
18 TestDownloadOnly/v1.28.4/DeleteAll 1.28
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 1.07
21 TestDownloadOnly/v1.29.0-rc.2/json-events 10.99
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.21
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 1.06
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 1.09
30 TestBinaryMirror 6.62
31 TestOffline 273.07
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.25
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.24
36 TestAddons/Setup 368.58
39 TestAddons/parallel/Ingress 61.41
40 TestAddons/parallel/InspektorGadget 24.87
41 TestAddons/parallel/MetricsServer 19.91
42 TestAddons/parallel/HelmTiller 28.97
44 TestAddons/parallel/CSI 100.32
45 TestAddons/parallel/Headlamp 33.11
46 TestAddons/parallel/CloudSpanner 19.51
47 TestAddons/parallel/LocalPath 81.95
48 TestAddons/parallel/NvidiaDevicePlugin 20.3
49 TestAddons/parallel/Yakd 6.03
52 TestAddons/serial/GCPAuth/Namespaces 0.3
53 TestAddons/StoppedEnableDisable 47.18
54 TestCertOptions 310.32
55 TestCertExpiration 818.4
56 TestDockerFlags 358.35
57 TestForceSystemdFlag 464.18
58 TestForceSystemdEnv 590.25
65 TestErrorSpam/start 15.88
66 TestErrorSpam/status 33.96
67 TestErrorSpam/pause 21.16
68 TestErrorSpam/unpause 21.11
69 TestErrorSpam/stop 49.59
72 TestFunctional/serial/CopySyncFile 0.02
73 TestFunctional/serial/StartWithProxy 222.99
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 108.47
76 TestFunctional/serial/KubeContext 0.12
77 TestFunctional/serial/KubectlGetPods 0.19
80 TestFunctional/serial/CacheCmd/cache/add_remote 24.27
81 TestFunctional/serial/CacheCmd/cache/add_local 9.5
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.22
83 TestFunctional/serial/CacheCmd/cache/list 0.22
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 8.63
85 TestFunctional/serial/CacheCmd/cache/cache_reload 32.96
86 TestFunctional/serial/CacheCmd/cache/delete 0.43
87 TestFunctional/serial/MinikubeKubectlCmd 0.39
89 TestFunctional/serial/ExtraConfig 112.62
90 TestFunctional/serial/ComponentHealth 0.17
91 TestFunctional/serial/LogsCmd 7.82
92 TestFunctional/serial/LogsFileCmd 9.63
93 TestFunctional/serial/InvalidService 20.18
99 TestFunctional/parallel/StatusCmd 38.49
103 TestFunctional/parallel/ServiceCmdConnect 24.97
104 TestFunctional/parallel/AddonsCmd 0.7
105 TestFunctional/parallel/PersistentVolumeClaim 37
107 TestFunctional/parallel/SSHCmd 21.51
108 TestFunctional/parallel/CpCmd 55.09
109 TestFunctional/parallel/MySQL 54.46
110 TestFunctional/parallel/FileSync 9.88
111 TestFunctional/parallel/CertSync 56.75
115 TestFunctional/parallel/NodeLabels 0.19
117 TestFunctional/parallel/NonActiveRuntimeDisabled 10.72
119 TestFunctional/parallel/License 2.42
120 TestFunctional/parallel/ServiceCmd/DeployApp 18.43
121 TestFunctional/parallel/Version/short 0.21
122 TestFunctional/parallel/Version/components 7.36
123 TestFunctional/parallel/ImageCommands/ImageListShort 6.94
124 TestFunctional/parallel/ImageCommands/ImageListTable 6.91
125 TestFunctional/parallel/ImageCommands/ImageListJson 6.85
126 TestFunctional/parallel/ImageCommands/ImageListYaml 6.87
127 TestFunctional/parallel/ImageCommands/ImageBuild 24.25
128 TestFunctional/parallel/ImageCommands/Setup 4.29
129 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 21.91
130 TestFunctional/parallel/ServiceCmd/List 12.7
131 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 18.88
132 TestFunctional/parallel/ServiceCmd/JSONOutput 11.9
134 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 27.74
136 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 8.38
138 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
140 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 15.69
142 TestFunctional/parallel/ImageCommands/ImageSaveToFile 8.59
148 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
149 TestFunctional/parallel/ProfileCmd/profile_not_create 8.2
150 TestFunctional/parallel/ImageCommands/ImageRemove 15.5
151 TestFunctional/parallel/ProfileCmd/profile_list 8.27
152 TestFunctional/parallel/ProfileCmd/profile_json_output 7.94
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 18.68
154 TestFunctional/parallel/DockerEnv/powershell 40.53
155 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 9.01
156 TestFunctional/parallel/UpdateContextCmd/no_changes 2.33
157 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.31
158 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.3
159 TestFunctional/delete_addon-resizer_images 0.43
160 TestFunctional/delete_my-image_image 0.17
161 TestFunctional/delete_minikube_cached_images 0.16
165 TestImageBuild/serial/Setup 176.14
166 TestImageBuild/serial/NormalBuild 8.36
167 TestImageBuild/serial/BuildWithBuildArg 7.53
168 TestImageBuild/serial/BuildWithDockerIgnore 6.87
169 TestImageBuild/serial/BuildWithSpecifiedDockerfile 6.77
178 TestJSONOutput/start/Command 219.54
179 TestJSONOutput/start/Audit 0
181 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/pause/Command 7.12
185 TestJSONOutput/pause/Audit 0
187 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/unpause/Command 7.13
191 TestJSONOutput/unpause/Audit 0
193 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/stop/Command 28.76
197 TestJSONOutput/stop/Audit 0
199 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
201 TestErrorJSONOutput 1.32
206 TestMainNoArgs 0.22
207 TestMinikubeProfile 459.03
210 TestMountStart/serial/StartWithMountFirst 139.83
211 TestMountStart/serial/VerifyMountFirst 8.8
215 TestMultiNode/serial/FreshStart2Nodes 386.03
216 TestMultiNode/serial/DeployApp2Nodes 8.28
218 TestMultiNode/serial/AddNode 203.56
219 TestMultiNode/serial/MultiNodeLabels 0.15
220 TestMultiNode/serial/ProfileList 6.92
221 TestMultiNode/serial/CopyFile 328.28
222 TestMultiNode/serial/StopNode 65
223 TestMultiNode/serial/StartAfterStop 150.3
225 TestMultiNode/serial/DeleteNode 57.55
230 TestPreload 438.57
231 TestScheduledStopWindows 308.68
236 TestRunningBinaryUpgrade 793.01
258 TestStoppedBinaryUpgrade/Setup 1.03
259 TestStoppedBinaryUpgrade/Upgrade 861.46
260 TestStoppedBinaryUpgrade/MinikubeLogs 9.3
262 TestPause/serial/Start 196.94
264 TestNoKubernetes/serial/StartNoK8sWithVersion 0.22
266 TestPause/serial/SecondStartNoReconfiguration 334.49
267 TestNetworkPlugins/group/auto/Start 423.45
270 TestPause/serial/Pause 7.68
271 TestPause/serial/VerifyStatus 11.66
272 TestPause/serial/Unpause 7.33
273 TestPause/serial/PauseAgain 7.45
274 TestPause/serial/DeletePaused 42.46
275 TestPause/serial/VerifyDeletedResources 10.5
276 TestNetworkPlugins/group/custom-flannel/Start 446.29
277 TestNetworkPlugins/group/auto/KubeletFlags 8.98
278 TestNetworkPlugins/group/auto/NetCatPod 16.42
279 TestNetworkPlugins/group/auto/DNS 0.28
280 TestNetworkPlugins/group/auto/Localhost 0.28
281 TestNetworkPlugins/group/auto/HairPin 0.27
282 TestNetworkPlugins/group/custom-flannel/KubeletFlags 10.02
283 TestNetworkPlugins/group/custom-flannel/NetCatPod 17.5
284 TestNetworkPlugins/group/custom-flannel/DNS 0.31
285 TestNetworkPlugins/group/custom-flannel/Localhost 0.32
286 TestNetworkPlugins/group/custom-flannel/HairPin 0.31
287 TestNetworkPlugins/group/false/Start 244.43
288 TestNetworkPlugins/group/enable-default-cni/Start 219.07
289 TestNetworkPlugins/group/false/KubeletFlags 10.14
290 TestNetworkPlugins/group/false/NetCatPod 16.55
291 TestNetworkPlugins/group/flannel/Start 236.43
292 TestNetworkPlugins/group/false/DNS 0.37
293 TestNetworkPlugins/group/false/Localhost 0.29
294 TestNetworkPlugins/group/false/HairPin 0.27
295 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 10.46
296 TestNetworkPlugins/group/enable-default-cni/NetCatPod 15.62
297 TestNetworkPlugins/group/enable-default-cni/DNS 0.34
298 TestNetworkPlugins/group/enable-default-cni/Localhost 0.26
299 TestNetworkPlugins/group/enable-default-cni/HairPin 0.3
301 TestNetworkPlugins/group/flannel/ControllerPod 6.02
302 TestNetworkPlugins/group/flannel/KubeletFlags 10.49
303 TestNetworkPlugins/group/flannel/NetCatPod 15.51
304 TestNetworkPlugins/group/flannel/DNS 0.33
305 TestNetworkPlugins/group/flannel/Localhost 0.33
306 TestNetworkPlugins/group/flannel/HairPin 0.31
307 TestNetworkPlugins/group/kubenet/Start 243.38
312 TestNetworkPlugins/group/kubenet/KubeletFlags 8.78
313 TestNetworkPlugins/group/kubenet/NetCatPod 22.43
314 TestNetworkPlugins/group/kubenet/DNS 0.3
315 TestNetworkPlugins/group/kubenet/Localhost 0.3
TestDownloadOnly/v1.16.0/json-events (14.89s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-201400 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-201400 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperv: (14.8898842s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (14.89s)

TestDownloadOnly/v1.16.0/preload-exists (0.15s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.15s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.22s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-201400
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-201400: exit status 85 (214.1958ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-201400 | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:38 UTC |          |
	|         | -p download-only-201400        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 17:38:28
	Running on machine: minikube5
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 17:38:28.894125   10780 out.go:291] Setting OutFile to fd 592 ...
	I0229 17:38:28.894596   10780 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:38:28.894596   10780 out.go:304] Setting ErrFile to fd 588...
	I0229 17:38:28.894681   10780 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0229 17:38:28.907573   10780 root.go:314] Error reading config file at C:\Users\jenkins.minikube5\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube5\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0229 17:38:28.916575   10780 out.go:298] Setting JSON to true
	I0229 17:38:28.920824   10780 start.go:129] hostinfo: {"hostname":"minikube5","uptime":50046,"bootTime":1709178262,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0229 17:38:28.920824   10780 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 17:38:28.923019   10780 out.go:97] [download-only-201400] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 17:38:28.923254   10780 notify.go:220] Checking for updates...
	I0229 17:38:28.924128   10780 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	W0229 17:38:28.923254   10780 preload.go:295] Failed to list preload files: open C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0229 17:38:28.925084   10780 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0229 17:38:28.925617   10780 out.go:169] MINIKUBE_LOCATION=18259
	I0229 17:38:28.925793   10780 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0229 17:38:28.926794   10780 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0229 17:38:28.928139   10780 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 17:38:34.161090   10780 out.go:97] Using the hyperv driver based on user configuration
	I0229 17:38:34.161193   10780 start.go:299] selected driver: hyperv
	I0229 17:38:34.161193   10780 start.go:903] validating driver "hyperv" against <nil>
	I0229 17:38:34.161454   10780 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 17:38:34.206130   10780 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0229 17:38:34.207515   10780 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0229 17:38:34.207515   10780 cni.go:84] Creating CNI manager for ""
	I0229 17:38:34.207515   10780 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0229 17:38:34.208056   10780 start_flags.go:323] config:
	{Name:download-only-201400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-201400 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 17:38:34.208878   10780 iso.go:125] acquiring lock: {Name:mk91f2ee29fbed5605669750e8cfa308a1229357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 17:38:34.210991   10780 out.go:97] Downloading VM boot image ...
	I0229 17:38:34.211078   10780 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18020/minikube-v1.32.1-1708638130-18020-amd64.iso.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.32.1-1708638130-18020-amd64.iso
	I0229 17:38:38.240834   10780 out.go:97] Starting control plane node download-only-201400 in cluster download-only-201400
	I0229 17:38:38.240931   10780 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0229 17:38:38.278550   10780 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0229 17:38:38.279325   10780 cache.go:56] Caching tarball of preloaded images
	I0229 17:38:38.279742   10780 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0229 17:38:38.280536   10780 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0229 17:38:38.280625   10780 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0229 17:38:38.346226   10780 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-201400"

-- /stdout --
** stderr ** 
	W0229 17:38:43.893841    2400 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.22s)

TestDownloadOnly/v1.16.0/DeleteAll (1.06s)

=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.0586703s)
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (1.06s)

TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (1.04s)

=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-201400
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-201400: (1.0357643s)
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (1.04s)

TestDownloadOnly/v1.28.4/json-events (10.91s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-993600 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-993600 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=hyperv: (10.9123041s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (10.91s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.22s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-993600
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-993600: exit status 85 (223.1286ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-201400 | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:38 UTC |                     |
	|         | -p download-only-201400        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:38 UTC | 29 Feb 24 17:38 UTC |
	| delete  | -p download-only-201400        | download-only-201400 | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:38 UTC | 29 Feb 24 17:38 UTC |
	| start   | -o=json --download-only        | download-only-993600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:38 UTC |                     |
	|         | -p download-only-993600        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 17:38:46
	Running on machine: minikube5
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 17:38:46.255531    8392 out.go:291] Setting OutFile to fd 672 ...
	I0229 17:38:46.256032    8392 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:38:46.256032    8392 out.go:304] Setting ErrFile to fd 744...
	I0229 17:38:46.256032    8392 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:38:46.277093    8392 out.go:298] Setting JSON to true
	I0229 17:38:46.280128    8392 start.go:129] hostinfo: {"hostname":"minikube5","uptime":50063,"bootTime":1709178262,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0229 17:38:46.280221    8392 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 17:38:46.281302    8392 out.go:97] [download-only-993600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 17:38:46.281406    8392 notify.go:220] Checking for updates...
	I0229 17:38:46.281406    8392 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 17:38:46.282135    8392 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0229 17:38:46.282680    8392 out.go:169] MINIKUBE_LOCATION=18259
	I0229 17:38:46.282952    8392 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0229 17:38:46.283717    8392 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0229 17:38:46.284652    8392 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 17:38:51.384246    8392 out.go:97] Using the hyperv driver based on user configuration
	I0229 17:38:51.384246    8392 start.go:299] selected driver: hyperv
	I0229 17:38:51.384246    8392 start.go:903] validating driver "hyperv" against <nil>
	I0229 17:38:51.384776    8392 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 17:38:51.425301    8392 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0229 17:38:51.426498    8392 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0229 17:38:51.426498    8392 cni.go:84] Creating CNI manager for ""
	I0229 17:38:51.426498    8392 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 17:38:51.426498    8392 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 17:38:51.426498    8392 start_flags.go:323] config:
	{Name:download-only-993600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-993600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 17:38:51.427227    8392 iso.go:125] acquiring lock: {Name:mk91f2ee29fbed5605669750e8cfa308a1229357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 17:38:51.427813    8392 out.go:97] Starting control plane node download-only-993600 in cluster download-only-993600
	I0229 17:38:51.427813    8392 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 17:38:51.468425    8392 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0229 17:38:51.468425    8392 cache.go:56] Caching tarball of preloaded images
	I0229 17:38:51.468905    8392 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 17:38:51.469569    8392 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0229 17:38:51.469676    8392 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0229 17:38:51.539301    8392 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4?checksum=md5:7ebdea7754e21f51b865dbfc36b53b7d -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0229 17:38:55.006206    8392 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0229 17:38:55.007089    8392 preload.go:256] verifying checksum of C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0229 17:38:56.070701    8392 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0229 17:38:56.070701    8392 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\download-only-993600\config.json ...
	I0229 17:38:56.071702    8392 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\download-only-993600\config.json: {Name:mk62312c50a57e3bd6315cca565462338fce7daf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:38:56.072668    8392 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0229 17:38:56.073489    8392 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\windows\amd64\v1.28.4/kubectl.exe
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-993600"

-- /stdout --
** stderr ** 
	W0229 17:38:57.127742   13516 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.22s)

TestDownloadOnly/v1.28.4/DeleteAll (1.28s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.2840109s)
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (1.28s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (1.07s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-993600
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-993600: (1.0693901s)
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (1.07s)

TestDownloadOnly/v1.29.0-rc.2/json-events (10.99s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-119000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-119000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=hyperv: (10.9869909s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (10.99s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.21s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-119000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-119000: exit status 85 (206.4075ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-201400 | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:38 UTC |                     |
	|         | -p download-only-201400           |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr         |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |                   |         |                     |                     |
	|         | --container-runtime=docker        |                      |                   |         |                     |                     |
	|         | --driver=hyperv                   |                      |                   |         |                     |                     |
	| delete  | --all                             | minikube             | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:38 UTC | 29 Feb 24 17:38 UTC |
	| delete  | -p download-only-201400           | download-only-201400 | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:38 UTC | 29 Feb 24 17:38 UTC |
	| start   | -o=json --download-only           | download-only-993600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:38 UTC |                     |
	|         | -p download-only-993600           |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr         |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |                   |         |                     |                     |
	|         | --container-runtime=docker        |                      |                   |         |                     |                     |
	|         | --driver=hyperv                   |                      |                   |         |                     |                     |
	| delete  | --all                             | minikube             | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:38 UTC | 29 Feb 24 17:38 UTC |
	| delete  | -p download-only-993600           | download-only-993600 | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:38 UTC | 29 Feb 24 17:38 UTC |
	| start   | -o=json --download-only           | download-only-119000 | minikube5\jenkins | v1.32.0 | 29 Feb 24 17:38 UTC |                     |
	|         | -p download-only-119000           |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr         |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |                   |         |                     |                     |
	|         | --container-runtime=docker        |                      |                   |         |                     |                     |
	|         | --driver=hyperv                   |                      |                   |         |                     |                     |
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/29 17:38:59
	Running on machine: minikube5
	Binary: Built with gc go1.22.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0229 17:38:59.750616    5436 out.go:291] Setting OutFile to fd 684 ...
	I0229 17:38:59.751193    5436 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:38:59.751193    5436 out.go:304] Setting ErrFile to fd 672...
	I0229 17:38:59.751193    5436 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 17:38:59.769862    5436 out.go:298] Setting JSON to true
	I0229 17:38:59.773320    5436 start.go:129] hostinfo: {"hostname":"minikube5","uptime":50077,"bootTime":1709178262,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0229 17:38:59.773320    5436 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 17:38:59.774628    5436 out.go:97] [download-only-119000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 17:38:59.775355    5436 notify.go:220] Checking for updates...
	I0229 17:38:59.776038    5436 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 17:38:59.776650    5436 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0229 17:38:59.777343    5436 out.go:169] MINIKUBE_LOCATION=18259
	I0229 17:38:59.777873    5436 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0229 17:38:59.778744    5436 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0229 17:38:59.779961    5436 driver.go:392] Setting default libvirt URI to qemu:///system
	I0229 17:39:04.997247    5436 out.go:97] Using the hyperv driver based on user configuration
	I0229 17:39:04.997369    5436 start.go:299] selected driver: hyperv
	I0229 17:39:04.997369    5436 start.go:903] validating driver "hyperv" against <nil>
	I0229 17:39:04.997443    5436 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0229 17:39:05.041828    5436 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0229 17:39:05.042966    5436 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0229 17:39:05.042966    5436 cni.go:84] Creating CNI manager for ""
	I0229 17:39:05.042966    5436 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0229 17:39:05.042966    5436 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0229 17:39:05.042966    5436 start_flags.go:323] config:
	{Name:download-only-119000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-119000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0229 17:39:05.043681    5436 iso.go:125] acquiring lock: {Name:mk91f2ee29fbed5605669750e8cfa308a1229357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0229 17:39:05.044490    5436 out.go:97] Starting control plane node download-only-119000 in cluster download-only-119000
	I0229 17:39:05.044490    5436 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0229 17:39:05.090121    5436 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0229 17:39:05.090121    5436 cache.go:56] Caching tarball of preloaded images
	I0229 17:39:05.090756    5436 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0229 17:39:05.091539    5436 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0229 17:39:05.091613    5436 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0229 17:39:05.153279    5436 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4?checksum=md5:47acda482c3add5b56147c92b8d7f468 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0229 17:39:08.628652    5436 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0229 17:39:08.629459    5436 preload.go:256] verifying checksum of C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0229 17:39:09.529960    5436 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I0229 17:39:09.530283    5436 profile.go:148] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\download-only-119000\config.json ...
	I0229 17:39:09.530283    5436 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\download-only-119000\config.json: {Name:mkd26b3935f96d059f0cca855242e0dd95464785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0229 17:39:09.531435    5436 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0229 17:39:09.532923    5436 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\windows\amd64\v1.29.0-rc.2/kubectl.exe
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-119000"

-- /stdout --
** stderr ** 
	W0229 17:39:10.687189    5048 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.21s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (1.06s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.0607192s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (1.06s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (1.09s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-119000
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-119000: (1.0928551s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (1.09s)

TestBinaryMirror (6.62s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-229100 --alsologtostderr --binary-mirror http://127.0.0.1:51220 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-229100 --alsologtostderr --binary-mirror http://127.0.0.1:51220 --driver=hyperv: (5.8277937s)
helpers_test.go:175: Cleaning up "binary-mirror-229100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-229100
--- PASS: TestBinaryMirror (6.62s)

TestOffline (273.07s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-863600 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-863600 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (3m43.9207357s)
helpers_test.go:175: Cleaning up "offline-docker-863600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-863600
E0229 19:35:23.324440    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
E0229 19:35:31.998856    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-863600: (49.1444929s)
--- PASS: TestOffline (273.07s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.25s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-268800
addons_test.go:928: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-268800: exit status 85 (251.3544ms)

-- stdout --
	* Profile "addons-268800" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-268800"

-- /stdout --
** stderr ** 
	W0229 17:39:22.881636    4896 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.25s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.24s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-268800
addons_test.go:939: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-268800: exit status 85 (240.8603ms)

-- stdout --
	* Profile "addons-268800" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-268800"

-- /stdout --
** stderr **
	W0229 17:39:22.881636    8000 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.24s)

TestAddons/Setup (368.58s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-268800 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-268800 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller: (6m8.5802306s)
--- PASS: TestAddons/Setup (368.58s)

TestAddons/parallel/Ingress (61.41s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-268800 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-268800 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-268800 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d8309099-5a5a-4d52-9dd6-fb7c80198b85] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d8309099-5a5a-4d52-9dd6-fb7c80198b85] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.012041s
addons_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-268800 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe -p addons-268800 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (9.5800545s)
addons_test.go:269: debug: unexpected stderr for out/minikube-windows-amd64.exe -p addons-268800 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W0229 17:47:06.727526   13844 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:286: (dbg) Run:  kubectl --context addons-268800 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-268800 ip
addons_test.go:291: (dbg) Done: out/minikube-windows-amd64.exe -p addons-268800 ip: (2.3122423s)
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 172.26.58.180
addons_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-268800 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-windows-amd64.exe -p addons-268800 addons disable ingress-dns --alsologtostderr -v=1: (14.4412431s)
addons_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-268800 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe -p addons-268800 addons disable ingress --alsologtostderr -v=1: (20.8073865s)
--- PASS: TestAddons/parallel/Ingress (61.41s)

TestAddons/parallel/InspektorGadget (24.87s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-dfbhw" [99b39921-fba5-4efa-9b5d-599d2726406f] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0196067s
addons_test.go:841: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-268800
addons_test.go:841: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-268800: (19.8511096s)
--- PASS: TestAddons/parallel/InspektorGadget (24.87s)

TestAddons/parallel/MetricsServer (19.91s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 11.6387ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-tnkzm" [f081da8b-c4f4-4f27-a4d1-de33a4a6dd10] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0205248s
addons_test.go:415: (dbg) Run:  kubectl --context addons-268800 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-268800 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-windows-amd64.exe -p addons-268800 addons disable metrics-server --alsologtostderr -v=1: (13.9255219s)
--- PASS: TestAddons/parallel/MetricsServer (19.91s)

TestAddons/parallel/HelmTiller (28.97s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 6.0171ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-n7k8w" [accd94dd-d92d-4fb3-b5f2-99d6bfea5044] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.015802s
addons_test.go:473: (dbg) Run:  kubectl --context addons-268800 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-268800 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (9.1310572s)
addons_test.go:490: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-268800 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:490: (dbg) Done: out/minikube-windows-amd64.exe -p addons-268800 addons disable helm-tiller --alsologtostderr -v=1: (14.8031226s)
--- PASS: TestAddons/parallel/HelmTiller (28.97s)

TestAddons/parallel/CSI (100.32s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 61.1111ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-268800 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268800 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-268800 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [d417867a-c0a3-420c-8830-040f24638a9e] Pending
helpers_test.go:344: "task-pv-pod" [d417867a-c0a3-420c-8830-040f24638a9e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [d417867a-c0a3-420c-8830-040f24638a9e] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 21.0364526s
addons_test.go:584: (dbg) Run:  kubectl --context addons-268800 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-268800 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-268800 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-268800 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-268800 delete pod task-pv-pod: (1.2366584s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-268800 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-268800 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-268800 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [3feac6ee-b702-427a-9860-fc84fd9f49e4] Pending
helpers_test.go:344: "task-pv-pod-restore" [3feac6ee-b702-427a-9860-fc84fd9f49e4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [3feac6ee-b702-427a-9860-fc84fd9f49e4] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.0087729s
addons_test.go:626: (dbg) Run:  kubectl --context addons-268800 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-268800 delete pod task-pv-pod-restore: (1.8470446s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-268800 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-268800 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-268800 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-windows-amd64.exe -p addons-268800 addons disable csi-hostpath-driver --alsologtostderr -v=1: (20.0843313s)
addons_test.go:642: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-268800 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Done: out/minikube-windows-amd64.exe -p addons-268800 addons disable volumesnapshots --alsologtostderr -v=1: (14.8827993s)
--- PASS: TestAddons/parallel/CSI (100.32s)

TestAddons/parallel/Headlamp (33.11s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-268800 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-268800 --alsologtostderr -v=1: (15.0848586s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-t4hjv" [9df4c63a-6d6a-4fda-bfdf-5594fbce47dc] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-t4hjv" [9df4c63a-6d6a-4fda-bfdf-5594fbce47dc] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 18.0213246s
--- PASS: TestAddons/parallel/Headlamp (33.11s)

TestAddons/parallel/CloudSpanner (19.51s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-nnxff" [e4152ace-bd12-4208-b9ec-8e44ad53e5db] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0198767s
addons_test.go:860: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-268800
addons_test.go:860: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-268800: (14.4650447s)
--- PASS: TestAddons/parallel/CloudSpanner (19.51s)

TestAddons/parallel/LocalPath (81.95s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-268800 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-268800 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268800 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [0e2fb79b-010d-441b-bf5c-cec63d9902ea] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [0e2fb79b-010d-441b-bf5c-cec63d9902ea] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [0e2fb79b-010d-441b-bf5c-cec63d9902ea] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.0230396s
addons_test.go:891: (dbg) Run:  kubectl --context addons-268800 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-268800 ssh "cat /opt/local-path-provisioner/pvc-917e0246-92bc-479c-8100-dad11aec0009_default_test-pvc/file1"
addons_test.go:900: (dbg) Done: out/minikube-windows-amd64.exe -p addons-268800 ssh "cat /opt/local-path-provisioner/pvc-917e0246-92bc-479c-8100-dad11aec0009_default_test-pvc/file1": (9.7267625s)
addons_test.go:912: (dbg) Run:  kubectl --context addons-268800 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-268800 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-268800 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-windows-amd64.exe -p addons-268800 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (59.6512425s)
--- PASS: TestAddons/parallel/LocalPath (81.95s)

TestAddons/parallel/NvidiaDevicePlugin (20.30s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-h6jfq" [6c4b400a-48aa-404c-9a94-dc86cfedf0a5] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.013947s
addons_test.go:955: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-268800
addons_test.go:955: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-268800: (15.2745258s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (20.30s)

TestAddons/parallel/Yakd (6.03s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-ntpzm" [a45c4336-659c-4283-b77c-6ee5a05c6263] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.0194938s
--- PASS: TestAddons/parallel/Yakd (6.03s)

TestAddons/serial/GCPAuth/Namespaces (0.30s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-268800 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-268800 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.30s)

TestAddons/StoppedEnableDisable (47.18s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-268800
addons_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-268800: (36.2292641s)
addons_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-268800
addons_test.go:176: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-268800: (4.3893537s)
addons_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-268800
addons_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-268800: (4.2561999s)
addons_test.go:185: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-268800
addons_test.go:185: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-268800: (2.3005499s)
--- PASS: TestAddons/StoppedEnableDisable (47.18s)

TestCertOptions (310.32s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-844800 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-844800 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv: (4m9.4168158s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-844800 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-844800 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (9.1824015s)
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-844800 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-844800 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-844800 -- "sudo cat /etc/kubernetes/admin.conf": (9.0723689s)
helpers_test.go:175: Cleaning up "cert-options-844800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-844800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-844800: (42.5140351s)
--- PASS: TestCertOptions (310.32s)

TestCertExpiration (818.40s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-587600 --memory=2048 --cert-expiration=3m --driver=hyperv
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-587600 --memory=2048 --cert-expiration=3m --driver=hyperv: (6m35.6860632s)
E0229 19:50:23.371711    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
E0229 19:50:32.047008    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-587600 --memory=2048 --cert-expiration=8760h --driver=hyperv
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-587600 --memory=2048 --cert-expiration=8760h --driver=hyperv: (3m24.4428758s)
helpers_test.go:175: Cleaning up "cert-expiration-587600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-587600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-587600: (38.2597156s)
--- PASS: TestCertExpiration (818.40s)
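TestCertExpiration above first starts the profile with `--cert-expiration=3m` (so the certs expire during the test) and then restarts with `--cert-expiration=8760h`. A small sketch of converting those Go-style duration strings into seconds, just to make the two values concrete (`go_duration_seconds` is an illustrative helper covering only the h/m/s subset, not minikube code):

```python
import re

# Seconds per supported Go-duration unit (subset: hours, minutes, seconds).
_UNIT_SECONDS = {"h": 3600, "m": 60, "s": 1}

def go_duration_seconds(spec: str) -> int:
    """Convert a Go-style duration like '3m' or '8760h' into seconds."""
    parts = re.findall(r"(\d+)([hms])", spec)
    if not parts:
        raise ValueError(f"unsupported duration: {spec!r}")
    return sum(int(n) * _UNIT_SECONDS[u] for n, u in parts)

# The two values exercised by TestCertExpiration:
assert go_duration_seconds("3m") == 180            # expire quickly, then...
assert go_duration_seconds("8760h") == 31_536_000  # ...renew for a year
```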

TestDockerFlags (358.35s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-012500 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-012500 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv: (4m58.9410172s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-012500 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-012500 ssh "sudo systemctl show docker --property=Environment --no-pager": (8.8743998s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-012500 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-012500 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (8.9606147s)
helpers_test.go:175: Cleaning up "docker-flags-012500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-012500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-012500: (41.5726814s)
--- PASS: TestDockerFlags (358.35s)
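TestDockerFlags above verifies that the `--docker-env=FOO=BAR --docker-env=BAZ=BAT` values surface in `systemctl show docker --property=Environment`. A minimal sketch of parsing such a property line (the sample line is shaped like systemctl's output but is illustrative, not captured from this run; real values may additionally be quoted):

```python
def parse_environment_property(line: str) -> dict:
    """Parse a `systemctl show --property=Environment` line into a dict."""
    prefix = "Environment="
    if not line.startswith(prefix):
        raise ValueError("not an Environment= property line")
    env = {}
    for token in line[len(prefix):].split():
        key, _, value = token.partition("=")
        env[key] = value
    return env

# Illustrative line shaped like systemctl's output; not from this run.
env = parse_environment_property("Environment=FOO=BAR BAZ=BAT")
assert env == {"FOO": "BAR", "BAZ": "BAT"}
```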

TestForceSystemdFlag (464.18s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-584100 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-584100 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: (6m53.2738208s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-584100 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-584100 ssh "docker info --format {{.CgroupDriver}}": (8.7363385s)
helpers_test.go:175: Cleaning up "force-systemd-flag-584100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-584100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-584100: (42.1525734s)
--- PASS: TestForceSystemdFlag (464.18s)

TestForceSystemdEnv (590.25s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-090200 --memory=2048 --alsologtostderr -v=5 --driver=hyperv
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-090200 --memory=2048 --alsologtostderr -v=5 --driver=hyperv: (8m57.9943942s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-090200 ssh "docker info --format {{.CgroupDriver}}"
E0229 19:40:32.024943    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-090200 ssh "docker info --format {{.CgroupDriver}}": (9.2722566s)
helpers_test.go:175: Cleaning up "force-systemd-env-090200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-090200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-090200: (42.9811325s)
--- PASS: TestForceSystemdEnv (590.25s)

TestErrorSpam/start (15.88s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-954700 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-954700 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-954700 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-954700 start --dry-run: (5.3122407s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-954700 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-954700 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-954700 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-954700 start --dry-run: (5.2819502s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-954700 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-954700 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-954700 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-954700 start --dry-run: (5.2650253s)
--- PASS: TestErrorSpam/start (15.88s)

TestErrorSpam/status (33.96s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-954700 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-954700 status
E0229 17:53:15.631992    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-954700 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-954700 status: (11.7084398s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-954700 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-954700 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-954700 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-954700 status: (11.0570502s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-954700 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-954700 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-954700 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-954700 status: (11.1723898s)
--- PASS: TestErrorSpam/status (33.96s)

TestErrorSpam/pause (21.16s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-954700 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-954700 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-954700 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-954700 pause: (7.187556s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-954700 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-954700 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-954700 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-954700 pause: (6.9428328s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-954700 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-954700 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-954700 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-954700 pause: (7.0022941s)
--- PASS: TestErrorSpam/pause (21.16s)

TestErrorSpam/unpause (21.11s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-954700 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-954700 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-954700 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-954700 unpause: (7.2034968s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-954700 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-954700 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-954700 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-954700 unpause: (6.9188956s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-954700 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-954700 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-954700 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-954700 unpause: (6.9609324s)
--- PASS: TestErrorSpam/unpause (21.11s)

TestErrorSpam/stop (49.59s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-954700 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-954700 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-954700 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-954700 stop: (33.279153s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-954700 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-954700 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-954700 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-954700 stop: (8.2294573s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-954700 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-954700 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-954700 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-954700 stop: (8.0746505s)
--- PASS: TestErrorSpam/stop (49.59s)

TestFunctional/serial/CopySyncFile (0.02s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\test\nested\copy\4356\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.02s)

TestFunctional/serial/StartWithProxy (222.99s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-070600 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
E0229 17:55:59.493309    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
functional_test.go:2230: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-070600 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (3m42.9739175s)
--- PASS: TestFunctional/serial/StartWithProxy (222.99s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (108.47s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-070600 --alsologtostderr -v=8
E0229 18:00:31.677770    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
functional_test.go:655: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-070600 --alsologtostderr -v=8: (1m48.4648714s)
functional_test.go:659: soft start took 1m48.4663965s for "functional-070600" cluster.
--- PASS: TestFunctional/serial/SoftStart (108.47s)

TestFunctional/serial/KubeContext (0.12s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.12s)

TestFunctional/serial/KubectlGetPods (0.19s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-070600 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.19s)

TestFunctional/serial/CacheCmd/cache/add_remote (24.27s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 cache add registry.k8s.io/pause:3.1: (8.2104037s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 cache add registry.k8s.io/pause:3.3: (8.1236463s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 cache add registry.k8s.io/pause:latest: (7.936569s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (24.27s)

TestFunctional/serial/CacheCmd/cache/add_local (9.5s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-070600 C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local1956032108\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-070600 C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local1956032108\001: (1.6389142s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 cache add minikube-local-cache-test:functional-070600
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 cache add minikube-local-cache-test:functional-070600: (7.4623377s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 cache delete minikube-local-cache-test:functional-070600
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-070600
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (9.50s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.22s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.22s)

TestFunctional/serial/CacheCmd/cache/list (0.22s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.22s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (8.63s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 ssh sudo crictl images
functional_test.go:1120: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 ssh sudo crictl images: (8.632868s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (8.63s)

TestFunctional/serial/CacheCmd/cache/cache_reload (32.96s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 ssh sudo docker rmi registry.k8s.io/pause:latest: (8.5535592s)
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-070600 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (8.517177s)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	W0229 18:01:56.437895    3412 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 cache reload: (7.3090513s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (8.5463835s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (32.96s)

TestFunctional/serial/CacheCmd/cache/delete (0.43s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.43s)

TestFunctional/serial/MinikubeKubectlCmd (0.39s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 kubectl -- --context functional-070600 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.39s)

TestFunctional/serial/ExtraConfig (112.62s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-070600 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-070600 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m52.6069837s)
functional_test.go:757: restart took 1m52.615968s for "functional-070600" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (112.62s)

TestFunctional/serial/ComponentHealth (0.17s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-070600 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.17s)

TestFunctional/serial/LogsCmd (7.82s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 logs
functional_test.go:1232: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 logs: (7.8175873s)
--- PASS: TestFunctional/serial/LogsCmd (7.82s)

TestFunctional/serial/LogsFileCmd (9.63s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 logs --file C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2101831181\001\logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 logs --file C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2101831181\001\logs.txt: (9.6123929s)
--- PASS: TestFunctional/serial/LogsFileCmd (9.63s)

TestFunctional/serial/InvalidService (20.18s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-070600 apply -f testdata\invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-070600
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-070600: exit status 115 (15.3224138s)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://172.26.52.106:30865 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	W0229 18:05:05.839707   10996 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube_service_d27a1c5599baa2f8050d003f41b0266333639286_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-070600 delete -f testdata\invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-070600 delete -f testdata\invalidsvc.yaml: (1.5037283s)
--- PASS: TestFunctional/serial/InvalidService (20.18s)

TestFunctional/parallel/StatusCmd (38.49s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 status
functional_test.go:850: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 status: (12.4352341s)
functional_test.go:856: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (12.8150332s)
functional_test.go:868: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 status -o json
functional_test.go:868: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 status -o json: (13.2336584s)
--- PASS: TestFunctional/parallel/StatusCmd (38.49s)

TestFunctional/parallel/ServiceCmdConnect (24.97s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-070600 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-070600 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-rzfjz" [50384a5c-2e54-4eac-9dd7-8d8cf6dc1f71] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-rzfjz" [50384a5c-2e54-4eac-9dd7-8d8cf6dc1f71] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.0148066s
functional_test.go:1645: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 service hello-node-connect --url
functional_test.go:1645: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 service hello-node-connect --url: (16.575866s)
functional_test.go:1651: found endpoint for hello-node-connect: http://172.26.52.106:31724
functional_test.go:1671: http://172.26.52.106:31724: success! body:

Hostname: hello-node-connect-55497b8b78-rzfjz

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://172.26.52.106:8080/

Request Headers:
	accept-encoding=gzip
	host=172.26.52.106:31724
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (24.97s)

TestFunctional/parallel/AddonsCmd (0.7s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.70s)

TestFunctional/parallel/PersistentVolumeClaim (37s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [578ab2cc-0eab-4572-8d30-0cabd99bfa92] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0119517s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-070600 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-070600 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-070600 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-070600 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2d591e65-78b6-46fe-a982-192085573337] Pending
helpers_test.go:344: "sp-pod" [2d591e65-78b6-46fe-a982-192085573337] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2d591e65-78b6-46fe-a982-192085573337] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.0167647s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-070600 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-070600 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-070600 delete -f testdata/storage-provisioner/pod.yaml: (1.0210206s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-070600 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [cb59a5c7-88b0-4100-9243-83fcf07f137e] Pending
helpers_test.go:344: "sp-pod" [cb59a5c7-88b0-4100-9243-83fcf07f137e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [cb59a5c7-88b0-4100-9243-83fcf07f137e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.0125283s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-070600 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (37.00s)

TestFunctional/parallel/SSHCmd (21.51s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 ssh "echo hello"
functional_test.go:1721: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 ssh "echo hello": (11.0229868s)
functional_test.go:1738: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 ssh "cat /etc/hostname": (10.4668608s)
--- PASS: TestFunctional/parallel/SSHCmd (21.51s)

TestFunctional/parallel/CpCmd (55.09s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 cp testdata\cp-test.txt /home/docker/cp-test.txt: (8.9921744s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 ssh -n functional-070600 "sudo cat /home/docker/cp-test.txt"
E0229 18:05:31.706644    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 ssh -n functional-070600 "sudo cat /home/docker/cp-test.txt": (9.2660267s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 cp functional-070600:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalparallelCpCmd2672250220\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 cp functional-070600:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalparallelCpCmd2672250220\001\cp-test.txt: (9.5240965s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 ssh -n functional-070600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 ssh -n functional-070600 "sudo cat /home/docker/cp-test.txt": (9.6854271s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (7.2107874s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 ssh -n functional-070600 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 ssh -n functional-070600 "sudo cat /tmp/does/not/exist/cp-test.txt": (10.3922608s)
--- PASS: TestFunctional/parallel/CpCmd (55.09s)

TestFunctional/parallel/MySQL (54.46s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-070600 replace --force -f testdata\mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-sgsh7" [9aa7afd9-7fe6-479c-8299-1ee6ac54352e] Pending
helpers_test.go:344: "mysql-859648c796-sgsh7" [9aa7afd9-7fe6-479c-8299-1ee6ac54352e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-sgsh7" [9aa7afd9-7fe6-479c-8299-1ee6ac54352e] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 43.0295218s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-070600 exec mysql-859648c796-sgsh7 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-070600 exec mysql-859648c796-sgsh7 -- mysql -ppassword -e "show databases;": exit status 1 (308.6234ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-070600 exec mysql-859648c796-sgsh7 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-070600 exec mysql-859648c796-sgsh7 -- mysql -ppassword -e "show databases;": exit status 1 (269.9963ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-070600 exec mysql-859648c796-sgsh7 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-070600 exec mysql-859648c796-sgsh7 -- mysql -ppassword -e "show databases;": exit status 1 (276.3892ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-070600 exec mysql-859648c796-sgsh7 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-070600 exec mysql-859648c796-sgsh7 -- mysql -ppassword -e "show databases;": exit status 1 (256.2474ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-070600 exec mysql-859648c796-sgsh7 -- mysql -ppassword -e "show databases;"
E0229 18:10:31.716743    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
--- PASS: TestFunctional/parallel/MySQL (54.46s)

TestFunctional/parallel/FileSync (9.88s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/4356/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 ssh "sudo cat /etc/test/nested/copy/4356/hosts"
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 ssh "sudo cat /etc/test/nested/copy/4356/hosts": (9.880046s)
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (9.88s)

TestFunctional/parallel/CertSync (56.75s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/4356.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 ssh "sudo cat /etc/ssl/certs/4356.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 ssh "sudo cat /etc/ssl/certs/4356.pem": (10.4802627s)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/4356.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 ssh "sudo cat /usr/share/ca-certificates/4356.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 ssh "sudo cat /usr/share/ca-certificates/4356.pem": (9.8854407s)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 ssh "sudo cat /etc/ssl/certs/51391683.0": (9.567483s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/43562.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 ssh "sudo cat /etc/ssl/certs/43562.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 ssh "sudo cat /etc/ssl/certs/43562.pem": (8.8793281s)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/43562.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 ssh "sudo cat /usr/share/ca-certificates/43562.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 ssh "sudo cat /usr/share/ca-certificates/43562.pem": (9.0099103s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (8.8901068s)
--- PASS: TestFunctional/parallel/CertSync (56.75s)

TestFunctional/parallel/NodeLabels (0.19s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-070600 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.19s)

TestFunctional/parallel/NonActiveRuntimeDisabled (10.72s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-070600 ssh "sudo systemctl is-active crio": exit status 1 (10.716028s)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	W0229 18:05:24.313894    1448 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (10.72s)

TestFunctional/parallel/License (2.42s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2284: (dbg) Done: out/minikube-windows-amd64.exe license: (2.3963683s)
--- PASS: TestFunctional/parallel/License (2.42s)

TestFunctional/parallel/ServiceCmd/DeployApp (18.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-070600 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-070600 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-lhgd7" [05d639e4-94d5-4055-8f46-67892ce06296] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-lhgd7" [05d639e4-94d5-4055-8f46-67892ce06296] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 18.0122529s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (18.43s)

TestFunctional/parallel/Version/short (0.21s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 version --short
--- PASS: TestFunctional/parallel/Version/short (0.21s)

TestFunctional/parallel/Version/components (7.36s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 version -o=json --components: (7.3581749s)
--- PASS: TestFunctional/parallel/Version/components (7.36s)

TestFunctional/parallel/ImageCommands/ImageListShort (6.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 image ls --format short --alsologtostderr: (6.9379173s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-070600 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-070600
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-070600
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-070600 image ls --format short --alsologtostderr:
W0229 18:08:13.575494   10676 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0229 18:08:13.638367   10676 out.go:291] Setting OutFile to fd 1380 ...
I0229 18:08:13.643111   10676 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 18:08:13.643111   10676 out.go:304] Setting ErrFile to fd 1156...
I0229 18:08:13.643111   10676 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 18:08:13.665724   10676 config.go:182] Loaded profile config "functional-070600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 18:08:13.666964   10676 config.go:182] Loaded profile config "functional-070600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 18:08:13.667595   10676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-070600 ).state
I0229 18:08:15.721546   10676 main.go:141] libmachine: [stdout =====>] : Running

I0229 18:08:15.721546   10676 main.go:141] libmachine: [stderr =====>] : 
I0229 18:08:15.731503   10676 ssh_runner.go:195] Run: systemctl --version
I0229 18:08:15.731503   10676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-070600 ).state
I0229 18:08:17.731126   10676 main.go:141] libmachine: [stdout =====>] : Running

I0229 18:08:17.731126   10676 main.go:141] libmachine: [stderr =====>] : 
I0229 18:08:17.731298   10676 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-070600 ).networkadapters[0]).ipaddresses[0]
I0229 18:08:20.105550   10676 main.go:141] libmachine: [stdout =====>] : 172.26.52.106

I0229 18:08:20.105550   10676 main.go:141] libmachine: [stderr =====>] : 
I0229 18:08:20.111339   10676 sshutil.go:53] new ssh client: &{IP:172.26.52.106 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-070600\id_rsa Username:docker}
I0229 18:08:20.234438   10676 ssh_runner.go:235] Completed: systemctl --version: (4.5026845s)
I0229 18:08:20.246581   10676 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (6.94s)

TestFunctional/parallel/ImageCommands/ImageListTable (6.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 image ls --format table --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 image ls --format table --alsologtostderr: (6.9103081s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-070600 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 7fe0e6f37db33 | 126MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| gcr.io/google-containers/addon-resizer      | functional-070600 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | alpine            | 6913ed9ec8d00 | 42.6MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | d058aa5ab969c | 122MB  |
| registry.k8s.io/kube-scheduler              | v1.28.4           | e3db313c6dbc0 | 60.1MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/library/minikube-local-cache-test | functional-070600 | 5317b17e4a1b7 | 30B    |
| docker.io/library/nginx                     | latest            | e4720093a3c13 | 187MB  |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 83f6cc407eed8 | 73.2MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-070600 image ls --format table --alsologtostderr:
W0229 18:08:28.186663    1852 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0229 18:08:28.263518    1852 out.go:291] Setting OutFile to fd 1212 ...
I0229 18:08:28.264276    1852 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 18:08:28.264276    1852 out.go:304] Setting ErrFile to fd 1312...
I0229 18:08:28.264276    1852 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 18:08:28.287120    1852 config.go:182] Loaded profile config "functional-070600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 18:08:28.287763    1852 config.go:182] Loaded profile config "functional-070600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 18:08:28.288392    1852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-070600 ).state
I0229 18:08:30.384897    1852 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0229 18:08:30.385022    1852 main.go:141] libmachine: [stderr =====>] : 
I0229 18:08:30.395486    1852 ssh_runner.go:195] Run: systemctl --version
I0229 18:08:30.395486    1852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-070600 ).state
I0229 18:08:32.438511    1852 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0229 18:08:32.438511    1852 main.go:141] libmachine: [stderr =====>] : 
I0229 18:08:32.438511    1852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-070600 ).networkadapters[0]).ipaddresses[0]
I0229 18:08:34.818244    1852 main.go:141] libmachine: [stdout =====>] : 172.26.52.106

                                                
                                                
I0229 18:08:34.818244    1852 main.go:141] libmachine: [stderr =====>] : 
I0229 18:08:34.819737    1852 sshutil.go:53] new ssh client: &{IP:172.26.52.106 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-070600\id_rsa Username:docker}
I0229 18:08:34.924504    1852 ssh_runner.go:235] Completed: systemctl --version: (4.5287664s)
I0229 18:08:34.932230    1852 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (6.91s)
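The table above prints human-readable sizes, while the JSON variant of this test (below) carries raw byte counts for the same images. The rounding appears to be decimal units with at most three significant digits; a sketch under that assumption (inferred from comparing the two outputs, not taken from minikube source):

```python
def human_size(size_bytes: int) -> str:
    """Format a byte count the way the table output above does:
    decimal units, at most three significant digits (assumed rounding)."""
    for unit, factor in (("GB", 1e9), ("MB", 1e6), ("kB", 1e3)):
        if size_bytes >= factor:
            return f"{size_bytes / factor:.3g}{unit}"
    return f"{size_bytes}B"
```

Applied to the JSON sizes, this reproduces the table's values, e.g. 126000000 → 126MB and 744000 → 744kB.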

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (6.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 image ls --format json --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 image ls --format json --alsologtostderr: (6.8511372s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-070600 image ls --format json --alsologtostderr:
[{"id":"5317b17e4a1b7bc46be9a90fcf506fe8ddda27db56f7294fb62a3bbdf6cd5687","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-070600"],"size":"30"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"126000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"122000000"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"73200000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"6e38f40d628db3002f56
17342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"e4720093a3c1381245b53a5a51b417963b3c4472d3f47fc301930a4f3b17666a","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"60100000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["regis
try.k8s.io/pause:latest"],"size":"240000"},{"id":"6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-070600"],"size":"32900000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-070600 image ls --format json --alsologtostderr:
W0229 18:08:21.361677    4064 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0229 18:08:21.429607    4064 out.go:291] Setting OutFile to fd 1356 ...
I0229 18:08:21.429863    4064 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 18:08:21.429863    4064 out.go:304] Setting ErrFile to fd 1484...
I0229 18:08:21.429863    4064 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 18:08:21.442218    4064 config.go:182] Loaded profile config "functional-070600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 18:08:21.443546    4064 config.go:182] Loaded profile config "functional-070600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 18:08:21.443919    4064 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-070600 ).state
I0229 18:08:23.446433    4064 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0229 18:08:23.446433    4064 main.go:141] libmachine: [stderr =====>] : 
I0229 18:08:23.462220    4064 ssh_runner.go:195] Run: systemctl --version
I0229 18:08:23.462758    4064 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-070600 ).state
I0229 18:08:25.474139    4064 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0229 18:08:25.474139    4064 main.go:141] libmachine: [stderr =====>] : 
I0229 18:08:25.474139    4064 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-070600 ).networkadapters[0]).ipaddresses[0]
I0229 18:08:27.869144    4064 main.go:141] libmachine: [stdout =====>] : 172.26.52.106

                                                
                                                
I0229 18:08:27.879747    4064 main.go:141] libmachine: [stderr =====>] : 
I0229 18:08:27.880142    4064 sshutil.go:53] new ssh client: &{IP:172.26.52.106 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-070600\id_rsa Username:docker}
I0229 18:08:27.989969    4064 ssh_runner.go:235] Completed: systemctl --version: (4.5274966s)
I0229 18:08:28.002783    4064 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (6.85s)
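The JSON stdout above pairs full 64-hex image IDs with the 13-character short IDs shown in the table variant of this test (e.g. e6f1816883972…). A small illustration of reading one record; the truncation length is inferred from comparing the two outputs, not taken from minikube source:

```python
import json

# One record copied from the `image ls --format json` stdout above.
record = json.loads(
    '{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",'
    '"repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"}'
)

def short_id(full_id: str) -> str:
    # 13 hex chars, matching the Image ID column in the table output.
    return full_id[:13]
```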

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (6.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 image ls --format yaml --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 image ls --format yaml --alsologtostderr: (6.8704552s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-070600 image ls --format yaml --alsologtostderr:
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "126000000"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "73200000"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "122000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-070600
size: "32900000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: e4720093a3c1381245b53a5a51b417963b3c4472d3f47fc301930a4f3b17666a
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "60100000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 5317b17e4a1b7bc46be9a90fcf506fe8ddda27db56f7294fb62a3bbdf6cd5687
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-070600
size: "30"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-070600 image ls --format yaml --alsologtostderr:
W0229 18:08:14.467776    7836 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0229 18:08:14.527895    7836 out.go:291] Setting OutFile to fd 584 ...
I0229 18:08:14.542058    7836 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 18:08:14.542058    7836 out.go:304] Setting ErrFile to fd 1292...
I0229 18:08:14.542058    7836 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 18:08:14.556235    7836 config.go:182] Loaded profile config "functional-070600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 18:08:14.556556    7836 config.go:182] Loaded profile config "functional-070600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 18:08:14.556947    7836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-070600 ).state
I0229 18:08:16.591696    7836 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0229 18:08:16.593978    7836 main.go:141] libmachine: [stderr =====>] : 
I0229 18:08:16.604075    7836 ssh_runner.go:195] Run: systemctl --version
I0229 18:08:16.604259    7836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-070600 ).state
I0229 18:08:18.626904    7836 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0229 18:08:18.626904    7836 main.go:141] libmachine: [stderr =====>] : 
I0229 18:08:18.626904    7836 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-070600 ).networkadapters[0]).ipaddresses[0]
I0229 18:08:21.037746    7836 main.go:141] libmachine: [stdout =====>] : 172.26.52.106

                                                
                                                
I0229 18:08:21.037746    7836 main.go:141] libmachine: [stderr =====>] : 
I0229 18:08:21.039527    7836 sshutil.go:53] new ssh client: &{IP:172.26.52.106 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-070600\id_rsa Username:docker}
I0229 18:08:21.150942    7836 ssh_runner.go:235] Completed: systemctl --version: (4.5464993s)
I0229 18:08:21.159345    7836 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (6.87s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (24.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-070600 ssh pgrep buildkitd: exit status 1 (8.9960866s)

                                                
                                                
** stderr ** 
	W0229 18:08:20.517922   13004 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 image build -t localhost/my-image:functional-070600 testdata\build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 image build -t localhost/my-image:functional-070600 testdata\build --alsologtostderr: (8.6040435s)
functional_test.go:319: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-070600 image build -t localhost/my-image:functional-070600 testdata\build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 72398ead05e2
Removing intermediate container 72398ead05e2
---> d2691ca69353
Step 3/3 : ADD content.txt /
---> f92b8b382023
Successfully built f92b8b382023
Successfully tagged localhost/my-image:functional-070600
functional_test.go:322: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-070600 image build -t localhost/my-image:functional-070600 testdata\build --alsologtostderr:
W0229 18:08:29.514682   13680 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0229 18:08:29.566268   13680 out.go:291] Setting OutFile to fd 1056 ...
I0229 18:08:29.586562   13680 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 18:08:29.586562   13680 out.go:304] Setting ErrFile to fd 1280...
I0229 18:08:29.586562   13680 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0229 18:08:29.609757   13680 config.go:182] Loaded profile config "functional-070600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 18:08:29.625651   13680 config.go:182] Loaded profile config "functional-070600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0229 18:08:29.626284   13680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-070600 ).state
I0229 18:08:31.643542   13680 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0229 18:08:31.643542   13680 main.go:141] libmachine: [stderr =====>] : 
I0229 18:08:31.652643   13680 ssh_runner.go:195] Run: systemctl --version
I0229 18:08:31.652643   13680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-070600 ).state
I0229 18:08:33.659341   13680 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0229 18:08:33.670007   13680 main.go:141] libmachine: [stderr =====>] : 
I0229 18:08:33.670112   13680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-070600 ).networkadapters[0]).ipaddresses[0]
I0229 18:08:36.077040   13680 main.go:141] libmachine: [stdout =====>] : 172.26.52.106

                                                
                                                
I0229 18:08:36.087688   13680 main.go:141] libmachine: [stderr =====>] : 
I0229 18:08:36.087811   13680 sshutil.go:53] new ssh client: &{IP:172.26.52.106 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-070600\id_rsa Username:docker}
I0229 18:08:36.187845   13680 ssh_runner.go:235] Completed: systemctl --version: (4.5349497s)
I0229 18:08:36.187845   13680 build_images.go:151] Building image from path: C:\Users\jenkins.minikube5\AppData\Local\Temp\build.1274240514.tar
I0229 18:08:36.199775   13680 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0229 18:08:36.225903   13680 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1274240514.tar
I0229 18:08:36.234902   13680 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1274240514.tar: stat -c "%s %y" /var/lib/minikube/build/build.1274240514.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1274240514.tar': No such file or directory
I0229 18:08:36.234957   13680 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\AppData\Local\Temp\build.1274240514.tar --> /var/lib/minikube/build/build.1274240514.tar (3072 bytes)
I0229 18:08:36.296028   13680 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1274240514
I0229 18:08:36.318934   13680 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1274240514 -xf /var/lib/minikube/build/build.1274240514.tar
I0229 18:08:36.333072   13680 docker.go:360] Building image: /var/lib/minikube/build/build.1274240514
I0229 18:08:36.340771   13680 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-070600 /var/lib/minikube/build/build.1274240514
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

                                                
                                                
I0229 18:08:37.935368   13680 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-070600 /var/lib/minikube/build/build.1274240514: (1.5944679s)
I0229 18:08:37.943342   13680 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1274240514
I0229 18:08:37.970968   13680 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1274240514.tar
I0229 18:08:37.990203   13680 build_images.go:207] Built localhost/my-image:functional-070600 from C:\Users\jenkins.minikube5\AppData\Local\Temp\build.1274240514.tar
I0229 18:08:37.990203   13680 build_images.go:123] succeeded building to: functional-070600
I0229 18:08:37.990203   13680 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 image ls: (6.64094s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (24.25s)
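The build log above shows the upload step guarded by an existence check: `stat -c "%s %y"` on the remote tar exits with status 1, so the build context is copied to /var/lib/minikube/build before extraction. A local sketch of that guard, with a plain file copy standing in for the ssh/scp calls (function name is hypothetical):

```python
import os
import shutil

def ensure_remote_copy(src: str, dest: str) -> bool:
    """Mirror the check-then-copy pattern from the log: probe the
    destination; if it is missing, copy the build context over.
    Returns True when a copy was actually performed."""
    if os.path.exists(dest):     # stands in for: stat -c "%s %y" <dest>
        return False             # already uploaded, nothing to do
    shutil.copyfile(src, dest)   # stands in for the scp of build.*.tar
    return True
```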

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (4.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (4.0299164s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-070600
--- PASS: TestFunctional/parallel/ImageCommands/Setup (4.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (21.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 image load --daemon gcr.io/google-containers/addon-resizer:functional-070600 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 image load --daemon gcr.io/google-containers/addon-resizer:functional-070600 --alsologtostderr: (14.5072302s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 image ls: (7.3983399s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (21.91s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (12.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 service list
functional_test.go:1455: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 service list: (12.6963361s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (12.70s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (18.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 image load --daemon gcr.io/google-containers/addon-resizer:functional-070600 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 image load --daemon gcr.io/google-containers/addon-resizer:functional-070600 --alsologtostderr: (11.6931131s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 image ls: (7.1724264s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (18.88s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (11.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 service list -o json: (11.8981261s)
functional_test.go:1490: Took "11.90124s" to run "out/minikube-windows-amd64.exe -p functional-070600 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (11.90s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (27.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (3.7070866s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-070600
functional_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 image load --daemon gcr.io/google-containers/addon-resizer:functional-070600 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 image load --daemon gcr.io/google-containers/addon-resizer:functional-070600 --alsologtostderr: (15.8931226s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 image ls: (7.8896271s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (27.74s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (8.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-070600 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-070600 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-070600 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-070600 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 800: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 7908: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (8.38s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-070600 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.69s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-070600 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [912aa981-e6e1-45b0-aa8d-a935d79da13d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [912aa981-e6e1-45b0-aa8d-a935d79da13d] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 15.02184s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.69s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (8.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 image save gcr.io/google-containers/addon-resizer:functional-070600 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 image save gcr.io/google-containers/addon-resizer:functional-070600 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (8.5812271s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (8.59s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-070600 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 13432: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (8.2s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1271: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (7.7478384s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (8.20s)

TestFunctional/parallel/ImageCommands/ImageRemove (15.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 image rm gcr.io/google-containers/addon-resizer:functional-070600 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 image rm gcr.io/google-containers/addon-resizer:functional-070600 --alsologtostderr: (7.9171414s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 image ls
E0229 18:06:54.891006    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 image ls: (7.5738667s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (15.50s)

TestFunctional/parallel/ProfileCmd/profile_list (8.27s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1306: (dbg) Done: out/minikube-windows-amd64.exe profile list: (8.0141637s)
functional_test.go:1311: Took "8.0232338s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1325: Took "240.7158ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (8.27s)

TestFunctional/parallel/ProfileCmd/profile_json_output (7.94s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1357: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (7.6973634s)
functional_test.go:1362: Took "7.7066254s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1375: Took "236.2934ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (7.94s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (18.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (10.0015159s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 image ls: (8.673871s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (18.68s)

TestFunctional/parallel/DockerEnv/powershell (40.53s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-070600 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-070600"
functional_test.go:495: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-070600 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-070600": (27.2370525s)
functional_test.go:518: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-070600 docker-env | Invoke-Expression ; docker images"
functional_test.go:518: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-070600 docker-env | Invoke-Expression ; docker images": (13.2803143s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (40.53s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (9.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-070600
functional_test.go:423: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 image save --daemon gcr.io/google-containers/addon-resizer:functional-070600 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 image save --daemon gcr.io/google-containers/addon-resizer:functional-070600 --alsologtostderr: (8.6202661s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-070600
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (9.01s)

TestFunctional/parallel/UpdateContextCmd/no_changes (2.33s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 update-context --alsologtostderr -v=2: (2.3246798s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (2.33s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.31s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 update-context --alsologtostderr -v=2: (2.311298s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.31s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (2.3s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-070600 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-070600 update-context --alsologtostderr -v=2: (2.2903481s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.30s)

TestFunctional/delete_addon-resizer_images (0.43s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-070600
--- PASS: TestFunctional/delete_addon-resizer_images (0.43s)

TestFunctional/delete_my-image_image (0.17s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-070600
--- PASS: TestFunctional/delete_my-image_image (0.17s)

TestFunctional/delete_minikube_cached_images (0.16s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-070600
--- PASS: TestFunctional/delete_minikube_cached_images (0.16s)

TestImageBuild/serial/Setup (176.14s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-953700 --driver=hyperv
E0229 18:15:23.044762    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
E0229 18:15:23.072068    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
E0229 18:15:23.096361    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
E0229 18:15:23.123481    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
E0229 18:15:23.166006    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
E0229 18:15:23.256037    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
E0229 18:15:23.425511    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
E0229 18:15:23.750323    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
E0229 18:15:24.394076    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
E0229 18:15:25.687604    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
E0229 18:15:28.260534    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
E0229 18:15:31.739576    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
E0229 18:15:33.392749    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
E0229 18:15:43.644711    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
E0229 18:16:04.138801    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-953700 --driver=hyperv: (2m56.1319983s)
--- PASS: TestImageBuild/serial/Setup (176.14s)

TestImageBuild/serial/NormalBuild (8.36s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-953700
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-953700: (8.3628302s)
--- PASS: TestImageBuild/serial/NormalBuild (8.36s)

TestImageBuild/serial/BuildWithBuildArg (7.53s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-953700
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-953700: (7.5322915s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (7.53s)

TestImageBuild/serial/BuildWithDockerIgnore (6.87s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-953700
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-953700: (6.8702197s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (6.87s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (6.77s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-953700
E0229 18:16:45.113418    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-953700: (6.7726691s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (6.77s)

TestJSONOutput/start/Command (219.54s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-727900 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E0229 18:30:23.109450    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
E0229 18:30:31.791985    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
E0229 18:31:46.298570    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-727900 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (3m39.5236607s)
--- PASS: TestJSONOutput/start/Command (219.54s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (7.12s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-727900 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-727900 --output=json --user=testUser: (7.1144284s)
--- PASS: TestJSONOutput/pause/Command (7.12s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (7.13s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-727900 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-727900 --output=json --user=testUser: (7.1245204s)
--- PASS: TestJSONOutput/unpause/Command (7.13s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (28.76s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-727900 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-727900 --output=json --user=testUser: (28.7534927s)
--- PASS: TestJSONOutput/stop/Command (28.76s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (1.32s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-812600 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-812600 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (207.1885ms)

-- stdout --
	{"specversion":"1.0","id":"1e586e81-0563-4db3-8bf2-490f69cc1a75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-812600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a21cd9a7-ba3d-4ec2-a5bf-c6dc584f7506","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube5\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"e9c0fc34-b1e3-4f8d-9af8-0eba93dcb943","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9231fe6b-b3f7-498a-8d7d-4b0b7186b4ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"fa9d3fac-05ff-49e4-83a6-cbce7eb3a74b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18259"}}
	{"specversion":"1.0","id":"c093ce9a-4b7c-4f17-8984-ab8b3797ab14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a797e9a5-6b90-4c7e-b61e-46385294c6fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
** stderr ** 
	W0229 18:32:45.855230    3404 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-812600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-812600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-812600: (1.0989506s)
--- PASS: TestErrorJSONOutput (1.32s)
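Each line of the `--output=json` stdout above is a CloudEvents envelope: the event kind is carried in `type` (`io.k8s.sigs.minikube.step`, `...info`, `...error`) and the payload sits under `data`. A minimal sketch of filtering error events out of such output — the sample events below are abbreviated stand-ins, not the exact lines from this run:

```python
import json

# Abbreviated stand-ins for the CloudEvents lines emitted by
# `minikube start --output=json` (one JSON object per line).
sample = "\n".join([
    '{"specversion":"1.0","id":"1","source":"https://minikube.sigs.k8s.io/",'
    '"type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json",'
    '"data":{"currentstep":"0","name":"Initial Minikube Setup","totalsteps":"19"}}',
    '{"specversion":"1.0","id":"2","source":"https://minikube.sigs.k8s.io/",'
    '"type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json",'
    '"data":{"exitcode":"56","name":"DRV_UNSUPPORTED_OS",'
    '"message":"The driver is not supported on windows/amd64"}}',
])

def minikube_errors(output):
    """Yield the data payload of each io.k8s.sigs.minikube.error event."""
    for line in output.splitlines():
        line = line.strip()
        if line.startswith("{"):          # skip stderr noise / blank lines
            event = json.loads(line)
            if event.get("type") == "io.k8s.sigs.minikube.error":
                yield event.get("data", {})

for err in minikube_errors(sample):
    print(err["name"], err["exitcode"])   # prints: DRV_UNSUPPORTED_OS 56
```

Tests like TestErrorJSONOutput rely on exactly this one-event-per-line property to assert on the emitted step numbers and exit codes.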

TestMainNoArgs (0.22s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.22s)

TestMinikubeProfile (459.03s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-868100 --driver=hyperv
E0229 18:35:23.118611    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
E0229 18:35:31.794697    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-868100 --driver=hyperv: (2m54.5584149s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-871300 --driver=hyperv
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-871300 --driver=hyperv: (2m56.9430362s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-868100
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (13.5429134s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-871300
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (13.6155814s)
helpers_test.go:175: Cleaning up "second-871300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-871300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-871300: (37.4358544s)
helpers_test.go:175: Cleaning up "first-868100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-868100
E0229 18:40:15.007694    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
E0229 18:40:23.132633    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-868100: (42.1485005s)
--- PASS: TestMinikubeProfile (459.03s)

TestMountStart/serial/StartWithMountFirst (139.83s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-680500 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
E0229 18:40:31.814914    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-680500 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m18.8237009s)
--- PASS: TestMountStart/serial/StartWithMountFirst (139.83s)

TestMountStart/serial/VerifyMountFirst (8.8s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-680500 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-680500 ssh -- ls /minikube-host: (8.8020455s)
--- PASS: TestMountStart/serial/VerifyMountFirst (8.80s)

TestMultiNode/serial/FreshStart2Nodes (386.03s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-421600 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0229 18:48:26.358978    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
E0229 18:50:23.163541    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
E0229 18:50:31.848463    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
multinode_test.go:86: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-421600 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (6m4.4803861s)
multinode_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 status --alsologtostderr
multinode_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-421600 status --alsologtostderr: (21.5455198s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (386.03s)

TestMultiNode/serial/DeployApp2Nodes (8.28s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-421600 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-421600 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-421600 -- rollout status deployment/busybox: (2.7135588s)
multinode_test.go:521: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-421600 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-421600 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-421600 -- exec busybox-5b5d89c9d6-4lvtb -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-421600 -- exec busybox-5b5d89c9d6-4lvtb -- nslookup kubernetes.io: (1.7928348s)
multinode_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-421600 -- exec busybox-5b5d89c9d6-dk9k8 -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-421600 -- exec busybox-5b5d89c9d6-4lvtb -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-421600 -- exec busybox-5b5d89c9d6-dk9k8 -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-421600 -- exec busybox-5b5d89c9d6-4lvtb -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-421600 -- exec busybox-5b5d89c9d6-dk9k8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (8.28s)

TestMultiNode/serial/AddNode (203.56s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-421600 -v 3 --alsologtostderr
E0229 18:55:23.182434    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
E0229 18:55:31.867163    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
E0229 18:56:55.066200    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
multinode_test.go:111: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-421600 -v 3 --alsologtostderr: (2m50.6043676s)
multinode_test.go:117: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 status --alsologtostderr
multinode_test.go:117: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-421600 status --alsologtostderr: (32.9604745s)
--- PASS: TestMultiNode/serial/AddNode (203.56s)

TestMultiNode/serial/MultiNodeLabels (0.15s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-421600 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.15s)

TestMultiNode/serial/ProfileList (6.92s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (6.9235209s)
--- PASS: TestMultiNode/serial/ProfileList (6.92s)

TestMultiNode/serial/CopyFile (328.28s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 status --output json --alsologtostderr
multinode_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-421600 status --output json --alsologtostderr: (32.7450167s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 cp testdata\cp-test.txt multinode-421600:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-421600 cp testdata\cp-test.txt multinode-421600:/home/docker/cp-test.txt: (8.6908159s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 ssh -n multinode-421600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-421600 ssh -n multinode-421600 "sudo cat /home/docker/cp-test.txt": (8.6525556s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 cp multinode-421600:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile1252078008\001\cp-test_multinode-421600.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-421600 cp multinode-421600:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile1252078008\001\cp-test_multinode-421600.txt: (8.5829919s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 ssh -n multinode-421600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-421600 ssh -n multinode-421600 "sudo cat /home/docker/cp-test.txt": (8.6287573s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 cp multinode-421600:/home/docker/cp-test.txt multinode-421600-m02:/home/docker/cp-test_multinode-421600_multinode-421600-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-421600 cp multinode-421600:/home/docker/cp-test.txt multinode-421600-m02:/home/docker/cp-test_multinode-421600_multinode-421600-m02.txt: (15.3355769s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 ssh -n multinode-421600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-421600 ssh -n multinode-421600 "sudo cat /home/docker/cp-test.txt": (8.6796046s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 ssh -n multinode-421600-m02 "sudo cat /home/docker/cp-test_multinode-421600_multinode-421600-m02.txt"
E0229 19:00:23.198862    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-421600 ssh -n multinode-421600-m02 "sudo cat /home/docker/cp-test_multinode-421600_multinode-421600-m02.txt": (8.6514229s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 cp multinode-421600:/home/docker/cp-test.txt multinode-421600-m03:/home/docker/cp-test_multinode-421600_multinode-421600-m03.txt
E0229 19:00:31.885962    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-421600 cp multinode-421600:/home/docker/cp-test.txt multinode-421600-m03:/home/docker/cp-test_multinode-421600_multinode-421600-m03.txt: (15.1656509s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 ssh -n multinode-421600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-421600 ssh -n multinode-421600 "sudo cat /home/docker/cp-test.txt": (8.6962573s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 ssh -n multinode-421600-m03 "sudo cat /home/docker/cp-test_multinode-421600_multinode-421600-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-421600 ssh -n multinode-421600-m03 "sudo cat /home/docker/cp-test_multinode-421600_multinode-421600-m03.txt": (8.7853895s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 cp testdata\cp-test.txt multinode-421600-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-421600 cp testdata\cp-test.txt multinode-421600-m02:/home/docker/cp-test.txt: (8.7387511s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 ssh -n multinode-421600-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-421600 ssh -n multinode-421600-m02 "sudo cat /home/docker/cp-test.txt": (8.6444569s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 cp multinode-421600-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile1252078008\001\cp-test_multinode-421600-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-421600 cp multinode-421600-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile1252078008\001\cp-test_multinode-421600-m02.txt: (8.6948466s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 ssh -n multinode-421600-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-421600 ssh -n multinode-421600-m02 "sudo cat /home/docker/cp-test.txt": (8.7009144s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 cp multinode-421600-m02:/home/docker/cp-test.txt multinode-421600:/home/docker/cp-test_multinode-421600-m02_multinode-421600.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-421600 cp multinode-421600-m02:/home/docker/cp-test.txt multinode-421600:/home/docker/cp-test_multinode-421600-m02_multinode-421600.txt: (14.8621613s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 ssh -n multinode-421600-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-421600 ssh -n multinode-421600-m02 "sudo cat /home/docker/cp-test.txt": (8.4580498s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 ssh -n multinode-421600 "sudo cat /home/docker/cp-test_multinode-421600-m02_multinode-421600.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-421600 ssh -n multinode-421600 "sudo cat /home/docker/cp-test_multinode-421600-m02_multinode-421600.txt": (8.5179452s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 cp multinode-421600-m02:/home/docker/cp-test.txt multinode-421600-m03:/home/docker/cp-test_multinode-421600-m02_multinode-421600-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-421600 cp multinode-421600-m02:/home/docker/cp-test.txt multinode-421600-m03:/home/docker/cp-test_multinode-421600-m02_multinode-421600-m03.txt: (14.8004997s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 ssh -n multinode-421600-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-421600 ssh -n multinode-421600-m02 "sudo cat /home/docker/cp-test.txt": (8.4832954s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 ssh -n multinode-421600-m03 "sudo cat /home/docker/cp-test_multinode-421600-m02_multinode-421600-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-421600 ssh -n multinode-421600-m03 "sudo cat /home/docker/cp-test_multinode-421600-m02_multinode-421600-m03.txt": (8.4461745s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 cp testdata\cp-test.txt multinode-421600-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-421600 cp testdata\cp-test.txt multinode-421600-m03:/home/docker/cp-test.txt: (8.4822127s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 ssh -n multinode-421600-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-421600 ssh -n multinode-421600-m03 "sudo cat /home/docker/cp-test.txt": (8.429191s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 cp multinode-421600-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile1252078008\001\cp-test_multinode-421600-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-421600 cp multinode-421600-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile1252078008\001\cp-test_multinode-421600-m03.txt: (8.4719193s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 ssh -n multinode-421600-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-421600 ssh -n multinode-421600-m03 "sudo cat /home/docker/cp-test.txt": (8.4799962s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 cp multinode-421600-m03:/home/docker/cp-test.txt multinode-421600:/home/docker/cp-test_multinode-421600-m03_multinode-421600.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-421600 cp multinode-421600-m03:/home/docker/cp-test.txt multinode-421600:/home/docker/cp-test_multinode-421600-m03_multinode-421600.txt: (14.8515499s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 ssh -n multinode-421600-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-421600 ssh -n multinode-421600-m03 "sudo cat /home/docker/cp-test.txt": (8.3736008s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 ssh -n multinode-421600 "sudo cat /home/docker/cp-test_multinode-421600-m03_multinode-421600.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-421600 ssh -n multinode-421600 "sudo cat /home/docker/cp-test_multinode-421600-m03_multinode-421600.txt": (8.4909089s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 cp multinode-421600-m03:/home/docker/cp-test.txt multinode-421600-m02:/home/docker/cp-test_multinode-421600-m03_multinode-421600-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-421600 cp multinode-421600-m03:/home/docker/cp-test.txt multinode-421600-m02:/home/docker/cp-test_multinode-421600-m03_multinode-421600-m02.txt: (14.804954s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 ssh -n multinode-421600-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-421600 ssh -n multinode-421600-m03 "sudo cat /home/docker/cp-test.txt": (8.3758958s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 ssh -n multinode-421600-m02 "sudo cat /home/docker/cp-test_multinode-421600-m03_multinode-421600-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-421600 ssh -n multinode-421600-m02 "sudo cat /home/docker/cp-test_multinode-421600-m03_multinode-421600-m02.txt": (8.4577524s)
--- PASS: TestMultiNode/serial/CopyFile (328.28s)

TestMultiNode/serial/StopNode (65s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-421600 node stop m03: (18.0638387s)
multinode_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-421600 status: exit status 7 (23.4242763s)

-- stdout --
	multinode-421600
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-421600-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-421600-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0229 19:04:36.624442    7012 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
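The `minikube status` stdout above uses a fixed plain-text layout: a bare node-name line, `key: value` pairs beneath it, and a blank line separating nodes. A hypothetical parser sketch for that layout (an illustration only, not a helper from the test suite):

```python
def parse_status(text):
    """Parse `minikube status` plain-text output into {node: {field: value}}."""
    nodes, current = {}, None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            current = None            # blank line ends the current node block
        elif ":" in line:
            key, _, value = line.partition(":")
            if current:
                nodes[current][key.strip()] = value.strip()
        else:
            current = line            # a bare line names the next node
            nodes[current] = {}
    return nodes

# Sample modeled on the status output shown in this test run.
sample = """multinode-421600
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

multinode-421600-m03
type: Worker
host: Stopped
kubelet: Stopped
"""

status = parse_status(sample)
print(status["multinode-421600-m03"]["host"])  # prints: Stopped
```

This mirrors why the test expects exit status 7 here: any node reporting `host: Stopped` makes `minikube status` return a non-zero code even though the command itself ran cleanly.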
multinode_test.go:251: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 status --alsologtostderr
E0229 19:05:06.421109    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
E0229 19:05:23.211466    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-421600 status --alsologtostderr: exit status 7 (23.4919523s)

-- stdout --
	multinode-421600
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-421600-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-421600-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0229 19:05:00.037284    9600 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0229 19:05:00.110266    9600 out.go:291] Setting OutFile to fd 1812 ...
	I0229 19:05:00.110266    9600 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 19:05:00.110266    9600 out.go:304] Setting ErrFile to fd 1816...
	I0229 19:05:00.110266    9600 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 19:05:00.118127    9600 out.go:298] Setting JSON to false
	I0229 19:05:00.118127    9600 mustload.go:65] Loading cluster: multinode-421600
	I0229 19:05:00.118127    9600 notify.go:220] Checking for updates...
	I0229 19:05:00.123326    9600 config.go:182] Loaded profile config "multinode-421600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 19:05:00.123326    9600 status.go:255] checking status of multinode-421600 ...
	I0229 19:05:00.124407    9600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 19:05:02.035364    9600 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:05:02.035364    9600 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:05:02.035474    9600 status.go:330] multinode-421600 host status = "Running" (err=<nil>)
	I0229 19:05:02.035647    9600 host.go:66] Checking if "multinode-421600" exists ...
	I0229 19:05:02.036594    9600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 19:05:04.005448    9600 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:05:04.005448    9600 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:05:04.015210    9600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 19:05:06.336726    9600 main.go:141] libmachine: [stdout =====>] : 172.26.62.28
	
	I0229 19:05:06.336726    9600 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:05:06.344947    9600 host.go:66] Checking if "multinode-421600" exists ...
	I0229 19:05:06.354660    9600 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 19:05:06.354660    9600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600 ).state
	I0229 19:05:08.259186    9600 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:05:08.259186    9600 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:05:08.259186    9600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600 ).networkadapters[0]).ipaddresses[0]
	I0229 19:05:10.594835    9600 main.go:141] libmachine: [stdout =====>] : 172.26.62.28
	
	I0229 19:05:10.596535    9600 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:05:10.596782    9600 sshutil.go:53] new ssh client: &{IP:172.26.62.28 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600\id_rsa Username:docker}
	I0229 19:05:10.706052    9600 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.3511501s)
	I0229 19:05:10.715488    9600 ssh_runner.go:195] Run: systemctl --version
	I0229 19:05:10.733246    9600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:05:10.758544    9600 kubeconfig.go:92] found "multinode-421600" server: "https://172.26.62.28:8443"
	I0229 19:05:10.758628    9600 api_server.go:166] Checking apiserver status ...
	I0229 19:05:10.766507    9600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0229 19:05:10.800856    9600 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2121/cgroup
	W0229 19:05:10.819141    9600 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2121/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0229 19:05:10.829218    9600 ssh_runner.go:195] Run: ls
	I0229 19:05:10.838442    9600 api_server.go:253] Checking apiserver healthz at https://172.26.62.28:8443/healthz ...
	I0229 19:05:10.845481    9600 api_server.go:279] https://172.26.62.28:8443/healthz returned 200:
	ok
	I0229 19:05:10.845481    9600 status.go:421] multinode-421600 apiserver status = Running (err=<nil>)
	I0229 19:05:10.845481    9600 status.go:257] multinode-421600 status: &{Name:multinode-421600 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0229 19:05:10.846779    9600 status.go:255] checking status of multinode-421600-m02 ...
	I0229 19:05:10.846856    9600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 19:05:12.771235    9600 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:05:12.771235    9600 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:05:12.771326    9600 status.go:330] multinode-421600-m02 host status = "Running" (err=<nil>)
	I0229 19:05:12.771326    9600 host.go:66] Checking if "multinode-421600-m02" exists ...
	I0229 19:05:12.772229    9600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 19:05:14.735061    9600 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:05:14.735061    9600 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:05:14.735168    9600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 19:05:17.047439    9600 main.go:141] libmachine: [stdout =====>] : 172.26.56.47
	
	I0229 19:05:17.047439    9600 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:05:17.057244    9600 host.go:66] Checking if "multinode-421600-m02" exists ...
	I0229 19:05:17.065437    9600 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0229 19:05:17.065437    9600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m02 ).state
	I0229 19:05:19.006532    9600 main.go:141] libmachine: [stdout =====>] : Running
	
	I0229 19:05:19.006532    9600 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:05:19.008510    9600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-421600-m02 ).networkadapters[0]).ipaddresses[0]
	I0229 19:05:21.333276    9600 main.go:141] libmachine: [stdout =====>] : 172.26.56.47
	
	I0229 19:05:21.333276    9600 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:05:21.343767    9600 sshutil.go:53] new ssh client: &{IP:172.26.56.47 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-421600-m02\id_rsa Username:docker}
	I0229 19:05:21.443533    9600 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.3778529s)
	I0229 19:05:21.452176    9600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0229 19:05:21.472833    9600 status.go:257] multinode-421600-m02 status: &{Name:multinode-421600-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0229 19:05:21.472833    9600 status.go:255] checking status of multinode-421600-m03 ...
	I0229 19:05:21.476219    9600 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-421600-m03 ).state
	I0229 19:05:23.396288    9600 main.go:141] libmachine: [stdout =====>] : Off
	
	I0229 19:05:23.401108    9600 main.go:141] libmachine: [stderr =====>] : 
	I0229 19:05:23.401108    9600 status.go:330] multinode-421600-m03 host status = "Stopped" (err=<nil>)
	I0229 19:05:23.401108    9600 status.go:343] host is not running, skipping remaining checks
	I0229 19:05:23.401108    9600 status.go:257] multinode-421600-m03 status: &{Name:multinode-421600-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (65.00s)
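The status probe in the log above checks guest disk usage by running `df -h /var | awk 'NR==2{print $5}'` over SSH. As a hedged aside, the same percentage can be computed without shelling out; this is a minimal sketch (not minikube's actual code), queried against `/` so it runs outside the VM:

```python
import shutil

# Rough equivalent of the log's `df -h /var | awk 'NR==2{print $5}'` probe:
# percent of the filesystem in use, rendered like df's Use% column.
# Note: df rounds up and accounts for reserved blocks, so the exact
# number can differ slightly from df's output.
def use_percent(path="/"):
    usage = shutil.disk_usage(path)
    return f"{round(100 * usage.used / usage.total)}%"

print(use_percent())
```

The result is a single percentage string such as `42%`, matching the shape of the value the awk one-liner extracts.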

TestMultiNode/serial/StartAfterStop (150.3s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 node start m03 --alsologtostderr
E0229 19:05:31.900117    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-421600 node start m03 --alsologtostderr: (1m57.1424187s)
multinode_test.go:289: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 status
multinode_test.go:289: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-421600 status: (32.9917599s)
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (150.30s)

TestMultiNode/serial/DeleteNode (57.55s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-421600 node delete m03: (35.701317s)
multinode_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-421600 status --alsologtostderr
multinode_test.go:428: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-421600 status --alsologtostderr: (21.5351713s)
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (57.55s)

TestPreload (438.57s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-051200 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0229 19:20:23.266305    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
E0229 19:20:31.946650    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
E0229 19:21:46.484344    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-051200 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (3m30.2078489s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-051200 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-051200 image pull gcr.io/k8s-minikube/busybox: (7.6596867s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-051200
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-051200: (33.5828227s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-051200 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E0229 19:25:23.287238    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-051200 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (2m23.5973636s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-051200 image list
E0229 19:25:31.970250    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-051200 image list: (6.813085s)
helpers_test.go:175: Cleaning up "test-preload-051200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-051200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-051200: (36.7059866s)
--- PASS: TestPreload (438.57s)

TestScheduledStopWindows (308.68s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-446100 --memory=2048 --driver=hyperv
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-446100 --memory=2048 --driver=hyperv: (2m59.466283s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-446100 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-446100 --schedule 5m: (9.8727835s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-446100 -n scheduled-stop-446100
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-446100 -n scheduled-stop-446100: exit status 1 (10.0241387s)

** stderr ** 
	W0229 19:29:20.254685   12908 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-446100 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-446100 -- sudo systemctl show minikube-scheduled-stop --no-page: (8.7824164s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-446100 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-446100 --schedule 5s: (9.8947826s)
E0229 19:30:15.185555    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
E0229 19:30:23.299442    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
E0229 19:30:31.990618    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-446100
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-446100: exit status 7 (2.2180381s)

-- stdout --
	scheduled-stop-446100
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	W0229 19:30:48.966274    6552 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-446100 -n scheduled-stop-446100
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-446100 -n scheduled-stop-446100: exit status 7 (2.1582306s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0229 19:30:51.180759    5216 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-446100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-446100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-446100: (26.2623566s)
--- PASS: TestScheduledStopWindows (308.68s)

TestRunningBinaryUpgrade (793.01s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.1573341577.exe start -p running-upgrade-764600 --memory=2200 --vm-driver=hyperv
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.1573341577.exe start -p running-upgrade-764600 --memory=2200 --vm-driver=hyperv: (5m19.4995735s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-764600 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0229 19:46:55.251167    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-764600 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (6m45.4600036s)
helpers_test.go:175: Cleaning up "running-upgrade-764600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-764600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-764600: (1m7.2241996s)
--- PASS: TestRunningBinaryUpgrade (793.01s)

TestStoppedBinaryUpgrade/Setup (1.03s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.03s)

TestStoppedBinaryUpgrade/Upgrade (861.46s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.1177638624.exe start -p stopped-upgrade-829200 --memory=2200 --vm-driver=hyperv
E0229 19:38:26.547458    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
E0229 19:40:23.338441    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.1177638624.exe start -p stopped-upgrade-829200 --memory=2200 --vm-driver=hyperv: (7m24.5416477s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.1177638624.exe -p stopped-upgrade-829200 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.1177638624.exe -p stopped-upgrade-829200 stop: (33.4940211s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-829200 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0229 19:45:23.355853    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
E0229 19:45:32.033250    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-829200 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (6m23.4132459s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (861.46s)

TestStoppedBinaryUpgrade/MinikubeLogs (9.3s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-829200
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-829200: (9.2904157s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (9.30s)

TestPause/serial/Start (196.94s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-027300 --memory=2048 --install-addons=false --wait=all --driver=hyperv
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-027300 --memory=2048 --install-addons=false --wait=all --driver=hyperv: (3m16.9249926s)
--- PASS: TestPause/serial/Start (196.94s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.22s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-737000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-737000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (223.8226ms)

-- stdout --
	* [NoKubernetes-737000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	W0229 19:54:41.844491   10988 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.22s)

TestPause/serial/SecondStartNoReconfiguration (334.49s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-027300 --alsologtostderr -v=1 --driver=hyperv
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-027300 --alsologtostderr -v=1 --driver=hyperv: (5m34.4693163s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (334.49s)

TestNetworkPlugins/group/auto/Start (423.45s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-863900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperv
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p auto-863900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperv: (7m3.4467528s)
--- PASS: TestNetworkPlugins/group/auto/Start (423.45s)

TestPause/serial/Pause (7.68s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-027300 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-027300 --alsologtostderr -v=5: (7.677112s)
--- PASS: TestPause/serial/Pause (7.68s)

TestPause/serial/VerifyStatus (11.66s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-027300 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-027300 --output=json --layout=cluster: exit status 2 (11.6628211s)

-- stdout --
	{"Name":"pause-027300","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-027300","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	W0229 20:01:37.413532    5284 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestPause/serial/VerifyStatus (11.66s)
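The `--output=json --layout=cluster` payload captured above nests per-node component states under `Nodes[].Components`. A short sketch of pulling those states out; the JSON literal is abridged from the logged output and keeps only the fields the sketch reads:

```python
import json

# Abridged from the `minikube status --output=json --layout=cluster`
# output logged above (fields not read here are omitted).
status = json.loads("""
{"Name": "pause-027300", "StatusCode": 418, "StatusName": "Paused",
 "Nodes": [{"Name": "pause-027300", "StatusCode": 200, "StatusName": "OK",
   "Components": {
     "apiserver": {"Name": "apiserver", "StatusCode": 418, "StatusName": "Paused"},
     "kubelet":   {"Name": "kubelet",   "StatusCode": 405, "StatusName": "Stopped"}}}]}
""")

# Walk every node and print each component's human-readable state.
for node in status["Nodes"]:
    for name, comp in node["Components"].items():
        print(f"{node['Name']}/{name}: {comp['StatusName']}")
# prints:
#   pause-027300/apiserver: Paused
#   pause-027300/kubelet: Stopped
```

This matches the exit-status-2 result in the log: the cluster is reported Paused (HTTP-teapot-style code 418) while the kubelet component shows Stopped (405).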

TestPause/serial/Unpause (7.33s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-027300 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe unpause -p pause-027300 --alsologtostderr -v=5: (7.3309281s)
--- PASS: TestPause/serial/Unpause (7.33s)

TestPause/serial/PauseAgain (7.45s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-027300 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-027300 --alsologtostderr -v=5: (7.4545227s)
--- PASS: TestPause/serial/PauseAgain (7.45s)

TestPause/serial/DeletePaused (42.46s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-027300 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p pause-027300 --alsologtostderr -v=5: (42.4573968s)
--- PASS: TestPause/serial/DeletePaused (42.46s)

TestPause/serial/VerifyDeletedResources (10.5s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (10.4958273s)
--- PASS: TestPause/serial/VerifyDeletedResources (10.50s)

TestNetworkPlugins/group/custom-flannel/Start (446.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-flannel-863900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=hyperv
E0229 20:03:35.314521    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-flannel-863900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=hyperv: (7m26.2889011s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (446.29s)

TestNetworkPlugins/group/auto/KubeletFlags (8.98s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p auto-863900 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p auto-863900 "pgrep -a kubelet": (8.9759306s)
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (8.98s)

TestNetworkPlugins/group/auto/NetCatPod (16.42s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-863900 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-rmdp8" [ac004516-5f0e-41bd-a478-a593986d7d2d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-rmdp8" [ac004516-5f0e-41bd-a478-a593986d7d2d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 16.016026s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (16.42s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-863900 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.28s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.28s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-863900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.28s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-863900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (10.02s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p custom-flannel-863900 "pgrep -a kubelet"
E0229 20:10:32.112783    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-268800\client.crt: The system cannot find the path specified.
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p custom-flannel-863900 "pgrep -a kubelet": (10.0145659s)
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (10.02s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (17.5s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-863900 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-rhsg6" [ec935a86-f6df-4a10-97d6-6b270c6ed7f1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-rhsg6" [ec935a86-f6df-4a10-97d6-6b270c6ed7f1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 17.0201463s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (17.50s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-863900 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.31s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.32s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-863900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.32s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-863900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.31s)

                                                
                                    
TestNetworkPlugins/group/false/Start (244.43s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-863900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperv
E0229 20:11:46.691292    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p false-863900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperv: (4m4.4280583s)
--- PASS: TestNetworkPlugins/group/false/Start (244.43s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (219.07s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-863900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperv
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p enable-default-cni-863900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperv: (3m39.0718227s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (219.07s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (10.14s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p false-863900 "pgrep -a kubelet"
E0229 20:15:09.233986    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-863900\client.crt: The system cannot find the path specified.
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p false-863900 "pgrep -a kubelet": (10.1367628s)
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (10.14s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (16.55s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-863900 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-bwkjv" [e11d9501-3792-4e9d-bf95-25f1f415bea8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-bwkjv" [e11d9501-3792-4e9d-bf95-25f1f415bea8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 16.0115052s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (16.55s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (236.43s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p flannel-863900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperv
E0229 20:15:23.456744    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-070600\client.crt: The system cannot find the path specified.
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p flannel-863900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperv: (3m56.4267696s)
--- PASS: TestNetworkPlugins/group/flannel/Start (236.43s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.37s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-863900 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.37s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.29s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-863900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.29s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-863900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.27s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (10.46s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p enable-default-cni-863900 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p enable-default-cni-863900 "pgrep -a kubelet": (10.4542773s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (10.46s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (15.62s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-863900 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-qh9jd" [872d4bab-a445-4ebb-a56c-e269821b5462] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0229 20:17:12.124849    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-863900\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-56589dfd74-qh9jd" [872d4bab-a445-4ebb-a56c-e269821b5462] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 15.018735s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (15.62s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.34s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-863900 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.34s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-863900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-863900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.30s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-j448q" [ec6d813c-17d1-4440-a261-af13062c6558] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.0123595s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.02s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (10.49s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p flannel-863900 "pgrep -a kubelet"
E0229 20:19:28.149248    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-863900\client.crt: The system cannot find the path specified.
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p flannel-863900 "pgrep -a kubelet": (10.4765714s)
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (10.49s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (15.51s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-863900 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-gf2fp" [4a68a013-018b-4c25-9f42-5e1870a7a95d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-gf2fp" [4a68a013-018b-4c25-9f42-5e1870a7a95d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 15.0256068s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (15.51s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.33s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-863900 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.33s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.33s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-863900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.33s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.31s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-863900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.31s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (243.38s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-863900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperv
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kubenet-863900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperv: (4m3.3805352s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (243.38s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (8.78s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kubenet-863900 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p kubenet-863900 "pgrep -a kubelet": (8.7810159s)
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (8.78s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (22.43s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-863900 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-l9x4m" [5569a16d-2e5c-45f3-8225-77682ef2ca65] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-l9x4m" [5569a16d-2e5c-45f3-8225-77682ef2ca65] Running
E0229 20:26:36.601141    4356 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\false-863900\client.crt: The system cannot find the path specified.
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 22.0077768s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (22.43s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-863900 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.30s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.3s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-863900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.30s)

                                                
                                    

Test skip (33/247)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (300.03s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-070600 --alsologtostderr -v=1]
functional_test.go:912: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-070600 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 2124: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.03s)

                                                
                                    
TestFunctional/parallel/DryRun (5.04s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-070600 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:970: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-070600 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0351868s)

-- stdout --
	* [functional-070600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	W0229 18:07:06.417641    3968 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0229 18:07:06.486154    3968 out.go:291] Setting OutFile to fd 1248 ...
	I0229 18:07:06.486785    3968 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:07:06.486785    3968 out.go:304] Setting ErrFile to fd 612...
	I0229 18:07:06.486785    3968 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:07:06.506130    3968 out.go:298] Setting JSON to false
	I0229 18:07:06.509391    3968 start.go:129] hostinfo: {"hostname":"minikube5","uptime":51763,"bootTime":1709178262,"procs":199,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0229 18:07:06.509391    3968 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 18:07:06.511088    3968 out.go:177] * [functional-070600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 18:07:06.511823    3968 notify.go:220] Checking for updates...
	I0229 18:07:06.512386    3968 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 18:07:06.513214    3968 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 18:07:06.513558    3968 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0229 18:07:06.514582    3968 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 18:07:06.515296    3968 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 18:07:06.516865    3968 config.go:182] Loaded profile config "functional-070600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 18:07:06.517816    3968 driver.go:392] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:976: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.04s)

TestFunctional/parallel/InternationalLanguage (5.03s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-070600 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-070600 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0344333s)

-- stdout --
	* [functional-070600] minikube v1.32.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18259
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	W0229 18:07:11.442479    9356 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0229 18:07:11.498811    9356 out.go:291] Setting OutFile to fd 1344 ...
	I0229 18:07:11.498811    9356 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:07:11.498811    9356 out.go:304] Setting ErrFile to fd 1348...
	I0229 18:07:11.498811    9356 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0229 18:07:11.522148    9356 out.go:298] Setting JSON to false
	I0229 18:07:11.526477    9356 start.go:129] hostinfo: {"hostname":"minikube5","uptime":51768,"bootTime":1709178262,"procs":199,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4046 Build 19045.4046","kernelVersion":"10.0.19045.4046 Build 19045.4046","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0229 18:07:11.527173    9356 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0229 18:07:11.528596    9356 out.go:177] * [functional-070600] minikube v1.32.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4046 Build 19045.4046
	I0229 18:07:11.528709    9356 notify.go:220] Checking for updates...
	I0229 18:07:11.529446    9356 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0229 18:07:11.529446    9356 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0229 18:07:11.531069    9356 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0229 18:07:11.531636    9356 out.go:177]   - MINIKUBE_LOCATION=18259
	I0229 18:07:11.532437    9356 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0229 18:07:11.534275    9356 config.go:182] Loaded profile config "functional-070600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0229 18:07:11.536190    9356 driver.go:392] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:1021: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.03s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:230: The test WaitService/IngressIP is broken on hyperv https://github.com/kubernetes/minikube/issues/8381
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (13.37s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-863900 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-863900

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-863900

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-863900

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-863900

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-863900

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-863900

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-863900

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-863900

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-863900

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-863900

>>> host: /etc/nsswitch.conf:
W0229 19:31:21.114543    2616 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-863900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-863900"

>>> host: /etc/hosts:
W0229 19:31:21.352996    2496 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-863900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-863900"

>>> host: /etc/resolv.conf:
W0229 19:31:21.564915   11280 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-863900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-863900"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-863900

>>> host: crictl pods:
W0229 19:31:21.953596    9428 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-863900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-863900"

>>> host: crictl containers:
W0229 19:31:22.221741    5856 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-863900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-863900"

>>> k8s: describe netcat deployment:
error: context "cilium-863900" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-863900" does not exist

>>> k8s: netcat logs:
error: context "cilium-863900" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-863900" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-863900" does not exist

>>> k8s: coredns logs:
error: context "cilium-863900" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-863900" does not exist

>>> k8s: api server logs:
error: context "cilium-863900" does not exist

>>> host: /etc/cni:
W0229 19:31:23.794736    7116 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-863900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-863900"

>>> host: ip a s:
W0229 19:31:24.054154    8768 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-863900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-863900"

>>> host: ip r s:
W0229 19:31:24.303933    1160 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-863900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-863900"

>>> host: iptables-save:
W0229 19:31:24.587113    9404 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-863900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-863900"

>>> host: iptables table nat:
W0229 19:31:24.852250    7620 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-863900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-863900"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-863900

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-863900

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-863900" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-863900" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-863900

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-863900

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-863900" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-863900" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-863900" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-863900" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-863900" does not exist

>>> host: kubelet daemon status:
W0229 19:31:26.480062    9568 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-863900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-863900"

>>> host: kubelet daemon config:
W0229 19:31:26.702825    4280 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-863900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-863900"

>>> k8s: kubelet logs:
W0229 19:31:26.961854   13484 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-863900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-863900"

>>> host: /etc/kubernetes/kubelet.conf:
W0229 19:31:27.176707    2636 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-863900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-863900"

>>> host: /var/lib/kubelet/config.yaml:
W0229 19:31:27.389173    9276 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-863900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-863900"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-863900
>>> host: docker daemon status:
W0229 19:31:27.858800    9172 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-863900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-863900"
>>> host: docker daemon config:
W0229 19:31:28.085395   13848 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-863900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-863900"
>>> host: /etc/docker/daemon.json:
W0229 19:31:28.295362    5972 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-863900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-863900"
>>> host: docker system info:
W0229 19:31:28.512464   14324 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-863900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-863900"
>>> host: cri-docker daemon status:
W0229 19:31:28.726053   13616 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-863900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-863900"
>>> host: cri-docker daemon config:
W0229 19:31:28.968291    7984 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-863900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-863900"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
W0229 19:31:29.183976    4812 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-863900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-863900"
>>> host: /usr/lib/systemd/system/cri-docker.service:
W0229 19:31:29.396533   10428 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-863900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-863900"
>>> host: cri-dockerd version:
W0229 19:31:29.600921    9288 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-863900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-863900"
>>> host: containerd daemon status:
W0229 19:31:29.836618    6180 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-863900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-863900"
>>> host: containerd daemon config:
W0229 19:31:30.075036    6244 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-863900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-863900"
>>> host: /lib/systemd/system/containerd.service:
W0229 19:31:30.288991   11184 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-863900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-863900"
>>> host: /etc/containerd/config.toml:
W0229 19:31:30.498740    2692 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-863900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-863900"
>>> host: containerd config dump:
W0229 19:31:30.726671   13916 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-863900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-863900"
>>> host: crio daemon status:
W0229 19:31:30.974407    7348 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-863900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-863900"
>>> host: crio daemon config:
W0229 19:31:31.190432   10160 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-863900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-863900"
>>> host: /etc/crio:
W0229 19:31:31.407271    2504 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-863900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-863900"
>>> host: crio config:
W0229 19:31:31.623056    2944 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-863900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-863900"
----------------------- debugLogs end: cilium-863900 [took: 12.2255652s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-863900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cilium-863900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cilium-863900: (1.1427973s)
--- SKIP: TestNetworkPlugins/group/cilium (13.37s)